Repeated memory write callbacks
Hello. While tracking memory writes to a memory region in a heavily packed executable, I've found that in some cases the callback is triggered once for an 8-byte write and then 8 more times for 1-byte writes. Unfortunately, I cannot reproduce this behavior in a simple example, so maybe someone could point out what the possible error might be. Example trace:
write size, value, disassembly
size: 8 value: 0xfffffffffffff7ff 0x1400cd26d: mov qword ptr [r10], r11
size: 1 value: 0x00000000000000ff 0x1400cd26d: mov qword ptr [r10], r11
size: 1 value: 0x00000000000000f7 0x1400cd26d: mov qword ptr [r10], r11
size: 1 value: 0x00000000000000ff 0x1400cd26d: mov qword ptr [r10], r11
size: 1 value: 0x00000000000000ff 0x1400cd26d: mov qword ptr [r10], r11
size: 1 value: 0x00000000000000ff 0x1400cd26d: mov qword ptr [r10], r11
size: 1 value: 0x00000000000000ff 0x1400cd26d: mov qword ptr [r10], r11
size: 1 value: 0x00000000000000ff 0x1400cd26d: mov qword ptr [r10], r11
size: 1 value: 0x00000000000000ff 0x1400cd26d: mov qword ptr [r10], r11
size: 8 value: 0xffffffff000165a1 0x140007fad: push rdi
size: 4 value: 0x00000000fffcb99d 0x140007fae: xor dword ptr [rsp], eax
size: 8 value: 0x00000001400aae62 0x14009194e: push rbx
You can see that only the first entry (size: 8, value: 0xfffffffffffff7ff, 0x1400cd26d: mov qword ptr [r10], r11) should be present, but for some reason the callback is triggered 8 more times.
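For reference, here is a minimal sketch of this kind of write-tracking setup using Unicorn's Python bindings. The code bytes, addresses, and register values are placeholders chosen to mirror the trace; as the report notes, a simple setup like this does not necessarily reproduce the duplication:

```python
from unicorn import Uc, UC_ARCH_X86, UC_MODE_64, UC_HOOK_MEM_WRITE
from unicorn.x86_const import UC_X86_REG_R10, UC_X86_REG_R11

# mov qword ptr [r10], r11  (4D 89 1A)
CODE = b"\x4d\x89\x1a"
BASE = 0x1000
DATA = 0x2000

def hook_mem_write(uc, access, address, size, value, user_data):
    print("size: %d value: 0x%016x addr: 0x%x" % (size, value, address))

mu = Uc(UC_ARCH_X86, UC_MODE_64)
mu.mem_map(BASE, 0x1000)                       # code page
mu.mem_map(DATA, 0x1000)                       # watched data page
mu.mem_write(BASE, CODE)
mu.reg_write(UC_X86_REG_R10, 0x2ede)           # deliberately unaligned target
mu.reg_write(UC_X86_REG_R11, 0xFFFFFFFFFFFFF7FF)
mu.hook_add(UC_HOOK_MEM_WRITE, hook_mem_write)
mu.emu_start(BASE, BASE + len(CODE))
```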
If the address is badly aligned, then a read/write is probably split into multiple aligned reads/writes. In the worst cases on some architectures, this means byte-by-byte reads/writes.
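To make that concrete: the 8-byte store from the trace targets 0x2ede, which straddles the 8-byte boundary at 0x2ee0, so it touches two naturally aligned qwords (0x2ed8 and 0x2ee0). A throwaway check (my own helper, not part of Unicorn):

```python
def crosses_alignment(address, size):
    """True if [address, address + size) straddles a size-aligned boundary."""
    return address // size != (address + size - 1) // size

print(crosses_alignment(0x2ede, 8))  # True: spans the aligned qwords at 0x2ed8 and 0x2ee0
```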
@wtdcode thanks for the response! Indeed, the address is unaligned in all those cases; for example, 0x2ede is the write address from the trace above. I expected to see a single write callback for mov qword ptr [mem], reg, but I guess I need to handle this case as well.
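One way to handle it, as a heuristic sketch (the class and names below are my own invention, not a Unicorn API): remember the range of the last write reported at a given instruction and drop narrower callbacks that fall inside it. This assumes RIP is synced to the writing instruction inside the hook, which matches the trace above:

```python
from unicorn.x86_const import UC_X86_REG_RIP

class WriteDeduper:
    """Drop narrow callbacks that fall inside a wider write
    already reported for the same instruction."""
    def __init__(self):
        self.last_pc = None
        self.last_range = (0, 0)

    def on_write(self, uc, access, address, size, value, user_data):
        pc = uc.reg_read(UC_X86_REG_RIP)
        lo, hi = self.last_range
        if pc == self.last_pc and size < hi - lo and lo <= address < hi:
            return  # duplicate fragment of the wide write just seen
        self.last_pc = pc
        self.last_range = (address, address + size)
        print("size: %d value: 0x%x 0x%x" % (size, value, pc))

# usage: dedup = WriteDeduper(); mu.hook_add(UC_HOOK_MEM_WRITE, dedup.on_write)
```

For the trace above, this keeps the single 8-byte callback and suppresses the 8 byte-sized fragments that follow it; as the next comment points out, though, a filter like this cannot recover the true write sequence in every case.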
Hi, I ran into this issue as well. The problem isn't that the write gets split up byte by byte; the problem is that we see a HOOK_MEM callback for the whole word and then another one for each individual byte. This is a problem if you are expecting to see a certain sequence of writes: we can hack around the byte-by-byte behavior, but we can't hack around the duplication.