
Sounds like inverted hash chains. Generate a hash chain h_i = sha(h_(i-1)) from a random h_0 up to h_N. Each message m_i becomes (agent_id, state, hmac(h_(N-i+1), state), h_(N-i)). Upon receipt of message m_(i+1), the receiver can verify the hmac of the preceding state and that the newly revealed chain value is the preimage of the one in the earlier message. Any gaps in the sequence can be fast-forwarded through by repeated hashing and still verified. This provides for deletion without loss of verifiability, up to length N.
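A minimal sketch of that construction in Python (stdlib only; the key derivation at verification time is my reading of the indices, so I may be off by one):

```python
import hashlib
import hmac
import os

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

N = 16  # chain length: supports up to N messages

# Chain h_0 .. h_N from a random seed; messages reveal values from the
# end of the chain first, working backwards towards h_0.
chain = [os.urandom(32)]
for _ in range(N):
    chain.append(sha(chain[-1]))

def make_message(i: int, agent_id: str, state: bytes):
    """m_i = (agent_id, state, hmac(h_(N-i+1), state), h_(N-i))."""
    key = chain[N - i + 1]
    tag = hmac.new(key, state, hashlib.sha256).digest()
    return (agent_id, state, tag, chain[N - i])

def verify(earlier, later, gap: int) -> bool:
    """Given m_i and m_j with j - i = gap, fast-forward m_j's revealed
    value through the (possibly deleted) gap, then check m_i's chain
    link and hmac."""
    h = later[3]
    for _ in range(gap):
        h = sha(h)
    if h != earlier[3]:
        return False  # chain link broken
    key = sha(earlier[3])  # h_(N-i+1) = sha(h_(N-i))
    tag = hmac.new(key, earlier[1], hashlib.sha256).digest()
    return hmac.compare_digest(tag, earlier[2])

m1 = make_message(1, "agent-a", b"state 1")
m4 = make_message(4, "agent-a", b"state 4")  # m_2 and m_3 were deleted
assert verify(m1, m4, gap=3)
```

The deletion of m_2 and m_3 doesn't hurt: the verifier just hashes m_4's revealed value three times to land back on m_1's.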


I was thinking of something like this, but where you only send the whole shebang like you describe every now and then; I'll call those "sync points".

Between sync points, each message carries only part of the hmac, proportional to the state size, and omits h_(N-i). I.e. if sending a single keystroke, include a single byte of the hmac; if sending a few bytes, two bytes of the hmac; and so on.

If the sequence is continuous you can still verify, since you can regenerate h_(N-i) from a sync point's revealed value. A deletion can then still be done by issuing a redact operation that deletes everything from the start of the secret up to the next sync point, and adding an operation that reinserts the data between the end of the secret and the next sync point.
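A rough sketch of the between-sync-points variant (assuming the verifier regenerates the intermediate chain values from a sync point's revealed value by repeated hashing, and using a made-up size rule of roughly one tag byte per state byte; all names are mine):

```python
import hashlib
import hmac
import os

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def truncated_tag(key: bytes, state: bytes) -> bytes:
    # hypothetical size rule: ~one tag byte per state byte, capped at 32
    n = min(max(1, len(state)), 32)
    return hmac.new(key, state, hashlib.sha256).digest()[:n]

# Demo chain, as in the full scheme.
N = 8
chain = [os.urandom(32)]
for _ in range(N):
    chain.append(sha(chain[-1]))

def short_msg(i: int, state: bytes):
    """Between sync points, m_i carries only (state, truncated hmac)."""
    key = chain[N - i + 1]  # same key schedule as the full scheme
    return (state, truncated_tag(key, state))

def verify_run(sync_reveal: bytes, msgs_newest_first) -> bool:
    """sync_reveal is h_(N-k) from a sync point at position k; the
    messages k-1, k-2, ... before it are checked by rehashing to
    regenerate each h_(N-i) and the corresponding hmac key."""
    h = sync_reveal
    for state, short in msgs_newest_first:
        h = sha(h)            # this message's chain value h_(N-i)
        key = sha(h)          # its hmac key h_(N-i+1)
        if not hmac.compare_digest(truncated_tag(key, state), short):
            return False
    return True

# Messages 2 and 3 sit between sync points at 1 and 4.
msgs = [short_msg(3, b"ab"), short_msg(2, b"x")]  # newest first
assert verify_run(chain[N - 4], msgs)
assert len(msgs[1][1]) == 1  # single keystroke -> a single tag byte
```

Since only full-size sync points carry a revealed chain value, the short messages stay close to the size of the state itself.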

My thought was that while forging the truncated hmac of a single keystroke message might not be hard, doing it for two messages in a row should be quite difficult. And it would "just" roughly double the size of the messages on average, rather than adding dozens of bytes to each.

I was also thinking that h_0 should be derived from data that includes something that's not up to the individual actor, to make preimage attacks harder.

Anyway, way past bedtime and not my area at all so this might all be drivel, but interesting problem to ponder...



