Yasuo Ohgaki wrote:
Added yet another comparison function, as suggested by Lester.
bool str_word_compare(str, str)
This function compares data word by word rather than byte by byte.
It is supposed to be faster for large data.
Yasuo ... you seem to be missing the point, and I'm not quite sure where all these different hash compares come from but ...
bool memequ() should only return true or false, and simply compare 'word by word', be that 32-bit or 64-bit words. There should be nothing in the macro other than a word XOR, and buffers should be padded to the right boundary. It is the processing of unnecessary byte-related data which adds your 'insecurity' back in, and it is also adding extra unnecessary processing.
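
Something roughly along these lines, as a minimal C sketch rather than a proposed patch. The name memequ is the one from this thread; the word type and everything else is just illustrative, and it assumes both buffers are already padded and word-aligned:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of a word-by-word equality test. Both buffers are assumed to be
 * padded and aligned to a multiple of the native word size, so there is no
 * byte-level tail to handle. Accumulating the XOR of every word pair keeps
 * the run time independent of where (or whether) the buffers differ. */
static bool memequ(const void *a, const void *b, size_t nwords)
{
    const uintptr_t *wa = (const uintptr_t *)a;  /* 32-bit or 64-bit words,     */
    const uintptr_t *wb = (const uintptr_t *)b;  /* whichever the platform uses */
    uintptr_t diff = 0;
    size_t i;

    for (i = 0; i < nwords; i++)
        diff |= wa[i] ^ wb[i];   /* nothing but a word XOR per step */

    return diff == 0;            /* true/false only, no ordering result */
}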
The point about 'memequ' is that it should both speed up any use of memcmp where there is no need to return anything other than a match, since it is the subsequent search for the byte that decides '<' or '>' which introduces the timing attack in the first place, and also make these compares Unicode-safe. That way only the compares that need the fine-grained ordering result then need handling for a full Unicode installation.
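
For contrast, this is the shape of a plain byte-wise compare (a hypothetical sketch, not PHP's actual code): the early return at the first mismatch is exactly the data-dependent search for the deciding byte that opens the timing side channel.

#include <stddef.h>

/* Naive memcmp-style compare: bails out at the first differing byte, so the
 * run time depends on how much of the two buffers matches. */
static int byte_compare(const unsigned char *a, const unsigned char *b, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++) {
        if (a[i] != b[i])
            return a[i] < b[i] ? -1 : 1;
    }
    return 0;
}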
--
Lester Caine - G8HFL
-----------------------------
Contact - http://lsces.co.uk/wiki/?page=contact
L.S.Caine Electronic Services - http://lsces.co.uk
EnquirySolve - http://enquirysolve.com/
Model Engineers Digital Workshop - http://medw.co.uk
Rainbow Digital Media - http://rainbowdigitalmedia.co.uk