I did not analyze this further, but in a rather obscure case the following happens:
I am trying to build packet 0x1A, which is used to 'send' an object (item or container) to a client.
The serial has to be masked with 0x80000000, i.e. the highest bit of the serial must be set. In EScript this is a problem, because there is no unsigned integer type. The usual solution (suggested by a core dev, Racalac, if I remember correctly) was:
packet.SetInt32(3, (object.serial + 0x80000000) + 1);
In the resulting packet, a serial of 0x512D607B (taken from the actual example) becomes 0xD12D607B, as it should.
This worked fine with 32-bit compiled cores, and it continues to work fine under Windows, even with an x64 core!
Under Linux it no longer works: the result for the serial 0x512D607B is now 0xD12D607C!
But if I write
packet.SetInt32(3, (object.serial | 0x80000000));
it works! But written this way it does not work under the Windows x64 core!
This is rather nasty (I will fix it by setting the serial in two steps with .SetInt16), because a script that works perfectly under a core running on Windows will fail under the same core when it is compiled and run under Linux!
I suspect (but did not check) that integers are compiled differently under Linux: there a 64-bit integer is used, while the Windows compiler uses a 32-bit integer and wraps correctly.
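Just to make the suspected difference concrete, here is a small C++ sketch (C++ rather than EScript, because this is about what the compiled core does with the numbers, not about the script itself; the assumption that the core holds script integers in a native integer of platform-dependent width is mine and unchecked). With a 64-bit integer the addition does not wrap, so the old '+ 1' correction overshoots by exactly one, which is precisely the bad value I see under Linux; OR-ing the mask only sets the top bit and is independent of the integer width. Why the '+ 1' variant still produces the correct value on Windows is not reproduced by this sketch.

Code:

#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main()
{
    int64_t serial = 0x512D607B;  // serial from the example above

    // 64-bit arithmetic: (serial + 0x80000000) does not wrap, so the
    // trailing "+ 1" overshoots. Truncated to 32 bits this is exactly
    // the wrong value observed under Linux.
    uint32_t old_way = (uint32_t)((serial + 0x80000000LL) + 1);
    std::printf("old way: 0x%08" PRIX32 "\n", old_way);  // prints 0xD12D607C

    // OR-ing just sets the highest bit, whatever the integer width.
    uint32_t or_way = (uint32_t)(serial | 0x80000000LL);
    std::printf("OR:      0x%08" PRIX32 "\n", or_way);   // prints 0xD12D607B
    return 0;
}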
All in all (despite a lot of changes needed because of the changed behaviour of 'item.name'), the new core seems to run fine with our script base. I hope this is the last problem I find.
After testing the same scripts on both Linux-Pol and Windows-Pol, this is the solution for now, if anybody needs it:
Code:
// Old way: does not work under Linux, but works under Windows (both x64)
packet.SetInt32(3, (obj.serial + 0x80000000) + 1);

// Workaround: works on both Linux and Windows x64 cores
packet.SetInt16(3, (CInt(obj.serial / 65536) | 0x8000));
packet.SetInt16(5, obj.serial & 0xffff);
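The trick is simply that the two 16-bit writes keep every intermediate value far away from the 32-bit boundary: the high word gets its top bit set with a 16-bit mask (0x8000), and the low word is written unchanged, so it no longer matters whether the core holds the integer in 32 or 64 bits.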