digraph {

/* could shave an RTT by committing to the nameplate early, before
finishing the rest of the code input. While the user is still
typing/completing the code, we claim the nameplate, open the mailbox,
and retrieve the peer's PAKE message. Then as soon as the user
finishes entering the code, we build our own PAKE message, send PAKE,
compute the key, send VERSION. Starting from the Return, this saves
two round trips. OTOH it adds consequences to hitting Tab. */

start [label="Key\nMachine" style="dotted"]

/* two connected state machines: the first just puts the messages in
the right order, the second handles PAKE */

{rank=same; SO_00 PO_got_code SO_10}
{rank=same; SO_01 PO_got_both SO_11}

SO_00 [label="S00"]
SO_01 [label="S01: pake"]
SO_10 [label="S10: code"]
SO_11 [label="S11: both"]

SO_00 -> SO_01 [label="got_pake\n(early)"]
SO_00 -> PO_got_code [label="got_code"]
PO_got_code [shape="box" label="K1.got_code"]
PO_got_code -> SO_10
SO_01 -> PO_got_both [label="got_code"]
PO_got_both [shape="box" label="K1.got_code\nK1.got_pake"]
PO_got_both -> SO_11
SO_10 -> PO_got_pake [label="got_pake"]
PO_got_pake [shape="box" label="K1.got_pake"]
PO_got_pake -> SO_11

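/* The ordering machine above can be sketched in Python (a hypothetical
helper for illustration, not the project's actual code): a PAKE message that
arrives early (S00 -> S01) is buffered, and replayed to the key machine (K1)
once got_code arrives, so K1 always sees got_code before got_pake.

```python
class OrderingMachine:
    """Delivers got_code to the key machine (K1) before got_pake,
    regardless of arrival order. States mirror S00/S01/S10/S11."""

    def __init__(self, key_machine):
        self._k = key_machine
        self._have_code = False   # False: S00/S01, True: S10/S11
        self._early_pake = None   # buffered PAKE while in S01

    def got_code(self, code):
        self._have_code = True
        self._k.got_code(code)
        if self._early_pake is not None:
            # S01 -> S11: replay the buffered early PAKE
            self._k.got_pake(self._early_pake)
            self._early_pake = None

    def got_pake(self, pake):
        if self._have_code:
            self._k.got_pake(pake)  # S10 -> S11
        else:
            self._early_pake = pake  # S00 -> S01: arrived early, buffer it
```
*/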
S0 [label="S0: know\nnothing"]
S0 -> P0_build [label="got_code"]
P0_build [shape="box" label="build_pake\nM.add_message(pake)"]
P0_build -> S1
S1 [label="S1: know\ncode"]

/* the Mailbox will deliver each message exactly once, but doesn't
guarantee ordering: if Alice starts the process, then disconnects,
then Bob starts (reading PAKE, sending both his PAKE and his VERSION
phase), then Alice will see both PAKE and VERSION on her next
connect, and might get the VERSION first.

The Wormhole will queue inbound messages that it isn't ready for. The
wormhole shim that lets applications do w.get(phase=) must do
something similar, queueing inbound messages until it sees one for
the phase it currently cares about. */
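/* A minimal sketch of the queueing behavior described above (hypothetical,
for illustration only — the real shim is asynchronous and blocks/defers):
inbound (phase, body) pairs stay queued until a caller asks for that phase,
so an out-of-order VERSION does not get lost while waiting for PAKE.

```python
from collections import deque

class PhaseQueue:
    """Buffers inbound messages until someone asks for their phase."""

    def __init__(self):
        self._pending = deque()

    def deliver(self, phase, body):
        # Mailbox delivers each message exactly once, in any order
        self._pending.append((phase, body))

    def get(self, phase):
        # scan for the wanted phase, leaving other phases queued
        for i, (p, body) in enumerate(self._pending):
            if p == phase:
                del self._pending[i]
                return body
        return None  # not yet delivered; a real shim would wait
```
*/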
S1 -> P_mood_scary [label="got_pake\npake bad"]
P_mood_scary [shape="box" color="red" label="W.scared"]
P_mood_scary -> S5 [color="red"]
S5 [label="S5:\nscared" color="red"]
S1 -> P1_compute [label="got_pake\npake good"]
#S1 -> P_mood_lonely [label="close"]
/* add w.when_key(), fix w.when_verified() to fire later

Previously, w.when_verified() was documented to fire only after a valid
encrypted message was received, but in fact it fired as soon as the shared
key was derived (before any encrypted messages were seen, so no actual
"verification" could have occurred yet).

This fixes that, and also adds a new w.when_key() API call which fires at
the earlier point. Having something that fires early is useful for the CLI
commands that want to print a pacifier message when the peer is responding
slowly. In particular, it helps detect the case where 'wormhole send' has
quit early (after depositing the PAKE message on the server, but before the
receiver has started). In this case, the receiver will compute the shared
key, but then wait forever for a VERSION that will never come. By starting
a timer when w.when_key() fires, and cancelling it when w.when_verified()
fires, we have a good place to tell the user that something is taking
longer than it should.

This shifts responsibility for notifying Boss.got_verifier out of Key and
into Receive, since Receive is what notices the first valid encrypted
message. It also shifts the Boss's ordering expectations: it now receives
B.happy() before B.got_verifier(), and consequently got_verifier ought to
arrive in the S2_happy state rather than S1_lonely. */
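/* The pacifier-timer pattern described in the note above can be sketched
as follows (hypothetical helper; the real CLI uses the wormhole Deferreds
rather than threads):

```python
import threading

def start_pacifier(notify, delay=10.0):
    """Start a timer when the key is known; cancel it once verified.
    If the peer never sends VERSION, notify() fires after `delay`."""
    t = threading.Timer(delay, notify)
    t.start()
    return t

# usage sketch:
#   when w.when_key() fires:
#       timer = start_pacifier(lambda: print("peer is slow to respond"))
#   when w.when_verified() fires:
#       timer.cancel()
```
*/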
P1_compute [label="compute_key\nM.add_message(version)\nB.got_key\nR.got_key" shape="box"]
P1_compute -> S4
S4 [label="S4: know_key" color="green"]
}