diff --git a/.gitignore b/.gitignore index 9db0baa..d1e78f0 100644 --- a/.gitignore +++ b/.gitignore @@ -58,4 +58,5 @@ target/ /twistd.pid /relay.sqlite /misc/node_modules/ -/docs/events.png +/.automat_visualize/ +/docs/state-machines/*.png diff --git a/docs/api.md b/docs/api.md index 0555c48..67ae8ac 100644 --- a/docs/api.md +++ b/docs/api.md @@ -1,14 +1,14 @@ # Magic-Wormhole -This library provides a primitive function to securely transfer small amounts +This library provides a mechanism to securely transfer small amounts of data between two computers. Both machines must be connected to the internet, but they do not need to have public IP addresses or know how to contact each other ahead of time. -Security and connectivity is provided by means of an "invitation code": a -short string that is transcribed from one machine to the other by the users -at the keyboard. This works in conjunction with a baked-in "rendezvous -server" that relays information from one machine to the other. +Security and connectivity are provided by means of a "wormhole code": a short +string that is transcribed from one machine to the other by the users at the +keyboard. This works in conjunction with a baked-in "rendezvous server" that +relays information from one machine to the other. The "Wormhole" object provides a secure record pipe between any two programs that use the same wormhole code (and are configured with the same application @@ -17,141 +17,64 @@ but the encrypted data for all messages must pass through (and be temporarily stored on) the rendezvous server, which is a shared resource. For this reason, larger data (including bulk file transfers) should use the Transit class instead. The Wormhole object has a method to create a Transit object -for this purpose. +for this purpose. In the future, Transit will be deprecated, and this +functionality will be incorporated directly as a "dilated wormhole". 
+ +A quick example: + +```python +import wormhole +from twisted.internet.defer import inlineCallbacks + +@inlineCallbacks +def go(): + w = wormhole.create(appid, relay_url, reactor) + w.generate_code() + code = yield w.when_code() + print("code: %s" % code) + w.send(b"outbound data") + inbound = yield w.when_received() + yield w.close() +``` ## Modes -This library will eventually offer multiple modes. For now, only "transcribe -mode" is available. +The API comes in two flavors: Delegated and Deferred. Controlling the +Wormhole and sending data are identical in both, but they differ in how +inbound data and events are delivered to the application. -Transcribe mode has two variants. In the "machine-generated" variant, the -"initiator" machine creates the invitation code, displays it to the first -user, they convey it (somehow) to the second user, who transcribes it into -the second ("receiver") machine. In the "human-generated" variant, the two -humans come up with the code (possibly without computers), then later -transcribe it into both machines. +In Delegated mode, the Wormhole is given a "delegate" object, on which +certain methods will be called when information is available (e.g. when the +code is established, or when data messages are received). In Deferred mode, +the Wormhole object has methods which return Deferreds that will fire at +these same times. -When the initiator machine generates the invitation code, the initiator -contacts the rendezvous server and allocates a "channel ID", which is a small -integer. The initiator then displays the invitation code, which is the -channel-ID plus a few secret words. The user copies the code to the second -machine. The receiver machine connects to the rendezvous server, and uses the -invitation code to contact the initiator. They agree upon an encryption key, -and exchange a small encrypted+authenticated data message. 
- -When the humans create an invitation code out-of-band, they are responsible -for choosing an unused channel-ID (simply picking a random 3-or-more digit -number is probably enough), and some random words. The invitation code uses -the same format in either variant: channel-ID, a hyphen, and an arbitrary -string. - -The two machines participating in the wormhole setup are not distinguished: -it doesn't matter which one goes first, and both use the same Wormhole class. -In the first variant, one side calls `get_code()` while the other calls -`set_code()`. In the second variant, both sides call `set_code()`. (Note that -this is not true for the "Transit" protocol used for bulk data-transfer: the -Transit class currently distinguishes "Sender" from "Receiver", so the -programs on each side must have some way to decide ahead of time which is -which). - -Each side can then do an arbitrary number of `send()` and `get()` calls. -`send()` writes a message into the channel. `get()` waits for a new message -to be available, then returns it. The Wormhole is not meant as a long-term -communication channel, but some protocols work better if they can exchange an -initial pair of messages (perhaps offering some set of negotiable -capabilities), and then follow up with a second pair (to reveal the results -of the negotiation). - -Note: the application developer must be careful to avoid deadlocks (if both -sides want to `get()`, somebody has to `send()` first). - -When both sides are done, they must call `close()`, to flush all pending -`send()` calls, deallocate the channel, and close the websocket connection. 
- -## Twisted - -The Twisted-friendly flow looks like this (note that passing `reactor` is how -you get a non-blocking Wormhole): +Delegated mode: ```python -from twisted.internet import reactor -from wormhole.public_relay import RENDEZVOUS_RELAY -from wormhole import wormhole -w1 = wormhole(u"appid", RENDEZVOUS_RELAY, reactor) -d = w1.get_code() -def _got_code(code): - print "Invitation Code:", code - return w1.send(b"outbound data") -d.addCallback(_got_code) -d.addCallback(lambda _: w1.get()) -def _got(inbound_message): - print "Inbound message:", inbound_message -d.addCallback(_got) -d.addCallback(w1.close) -d.addBoth(lambda _: reactor.stop()) -reactor.run() +class MyDelegate: + def wormhole_got_code(self, code): + print("code: %s" % code) + def wormhole_received(self, data): # called for each message + print("got data, %d bytes" % len(data)) + +w = wormhole.create(appid, relay_url, reactor, delegate=MyDelegate()) +w.generate_code() ``` -On the other side, you call `set_code()` instead of waiting for `get_code()`: +Deferred mode: ```python -w2 = wormhole(u"appid", RENDEZVOUS_RELAY, reactor) -w2.set_code(code) -d = w2.send(my_message) -... +w = wormhole.create(appid, relay_url, reactor) +w.generate_code() +def print_code(code): + print("code: %s" % code) +w.when_code().addCallback(print_code) +def received(data): + print("got data, %d bytes" % len(data)) +w.when_received().addCallback(received) # gets exactly one message ``` -Note that the Twisted-form `close()` accepts (and returns) an optional -argument, so you can use `d.addCallback(w.close)` instead of -`d.addCallback(lambda _: w.close())`. - -## Verifier - -For extra protection against guessing attacks, Wormhole can provide a -"Verifier". This is a moderate-length series of bytes (a SHA256 hash) that is -derived from the supposedly-shared session key. If desired, both sides can -display this value, and the humans can manually compare them before allowing -the rest of the protocol to proceed. 
If they do not match, then the two -programs are not talking to each other (they may both be talking to a -man-in-the-middle attacker), and the protocol should be abandoned. - -To retrieve the verifier, you call `d=w.verify()` before any calls to -`send()/get()`. The Deferred will not fire until internal key-confirmation -has taken place (meaning the two sides have exchanged their initial PAKE -messages, and the wormhole codes matched), so `verify()` is also a good way -to detect typos or mistakes entering the code. The Deferred will errback with -wormhole.WrongPasswordError if the codes did not match, or it will callback -with the verifier bytes if they did match. - -Once retrieved, you can turn this into hex or Base64 to print it, or render -it as ASCII-art, etc. Once the users are convinced that `verify()` from both -sides are the same, call `send()/get()` to continue the protocol. If you call -`send()/get()` before `verify()`, it will perform the complete protocol -without pausing. - -## Generating the Invitation Code - -In most situations, the "sending" or "initiating" side will call `get_code()` -to generate the invitation code. This returns a string in the form -`NNN-code-words`. The numeric "NNN" prefix is the "channel id", and is a -short integer allocated by talking to the rendezvous server. The rest is a -randomly-generated selection from the PGP wordlist, providing a default of 16 -bits of entropy. The initiating program should display this code to the user, -who should transcribe it to the receiving user, who gives it to the Receiver -object by calling `set_code()`. The receiving program can also use -`input_code()` to use a readline-based input function: this offers tab -completion of allocated channel-ids and known codewords. - -Alternatively, the human users can agree upon an invitation code themselves, -and provide it to both programs later (both sides call `set_code()`). 
They -should choose a channel-id that is unlikely to already be in use (3 or more -digits are recommended), append a hyphen, and then include randomly-selected -words or characters. Dice, coin flips, shuffled cards, or repeated sampling -of a high-resolution stopwatch are all useful techniques. - -Note that the code is a human-readable string (the python "unicode" type in -python2, "str" in python3). - ## Application Identifier Applications using this library must provide an "application identifier", a @@ -167,18 +90,464 @@ ten Wormholes are active for a given app-id, the connection-id will only need to contain a single digit, even if some other app-id is currently using thousands of concurrent sessions. -## Rendezvous Relays +## Rendezvous Servers -The library depends upon a "rendezvous relay", which is a server (with a +The library depends upon a "rendezvous server", which is a service (on a public IP address) that delivers small encrypted messages from one client to the other. This must be the same for both clients, and is generally baked-in to the application source code or default config. -This library includes the URL of a public relay run by the author. -Application developers can use this one, or they can run their own (see the -`wormhole-server` command and the `src/wormhole/server/` directory) and -configure their clients to use it instead. This URL is passed as a unicode -string. +This library includes the URL of a public rendezvous server run by the +author. Application developers can use this one, or they can run their own +(see the `wormhole-server` command and the `src/wormhole/server/` directory) +and configure their clients to use it instead. This URL is passed as a +unicode string. Note that because the server actually speaks WebSockets, the +URL starts with `ws:` instead of `http:`. 
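Since a mistyped scheme in the baked-in configuration only shows up at connect time, an application may want to sanity-check the relay URL before use. The following is an illustrative sketch, not part of the library; the helper name and URL are hypothetical:

```python
from urllib.parse import urlparse

def check_relay_url(relay_url):
    # Hypothetical helper: the rendezvous server speaks WebSockets,
    # so the URL must use the ws: (or wss:) scheme, not http:
    if urlparse(relay_url).scheme not in ("ws", "wss"):
        raise ValueError("rendezvous URL must use ws: or wss:, got %r" % (relay_url,))
    return relay_url

check_relay_url("ws://relay.example.org:4000/v1")  # accepted
```

The validated string would then be passed as the `relay_url` argument described below.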
+ +## Wormhole Parameters + +All wormholes must be created with at least three parameters: + +* `appid`: a (unicode) string +* `relay_url`: a (unicode) string +* `reactor`: the Twisted reactor object + +In addition to these three, the `wormhole.create()` function takes several +optional arguments: + +* `delegate`: provide a Delegate object to enable "delegated mode", or pass + None (the default) to get "deferred mode" +* `journal`: provide a Journal object to enable journaled mode. See + journal.md for details. Note that journals only work with delegated mode, + not with deferred mode. +* `tor_manager`: to enable Tor support, create a `wormhole.TorManager` + instance and pass it here. This will hide the client's IP address by + proxying all connections (rendezvous and transit) through Tor. It also + enables connecting to Onion-service transit hints, and (in the future) will + enable the creation of Onion-services for transit purposes. +* `timing`: this accepts a DebugTiming instance, mostly for internal + diagnostic purposes, to record the transmit/receive timestamps for all + messages. The `wormhole --dump-timing=` feature uses this to build a + JSON-format data bundle, and the `misc/dump-timing.py` tool can build a + scrollable timing diagram from these bundles. +* `welcome_handler`: this is a function that will be called when the + Rendezvous Server's "welcome" message is received. It is used to display + important server messages in an application-specific way. +* `versions`: this can accept a dictionary (JSON-encodable) of data that will + be made available to the peer via the `got_version` event. This data is + delivered before any data messages, and can be used to indicate peer + capabilities. + +## Code Management + +Each wormhole connection is defined by a shared secret "wormhole code". These +codes can be generated offline (by picking a unique number and some secret +words), but are more commonly generated by whoever creates the first +wormhole. 
In the "bin/wormhole" file-transfer tool, the default behavior is +for the sender to create the code, and for the receiver to type it in. + +The code is a (unicode) string in the form `NNN-code-words`. The numeric +"NNN" prefix is the "channel id" or "nameplate", and is a short integer +allocated by talking to the rendezvous server. The rest is a +randomly-generated selection from the PGP wordlist, providing a default of 16 +bits of entropy. The initiating program should display this code to the user, +who should transcribe it to the receiving user, who gives it to their local +Wormhole object by calling `set_code()`. The receiving program can also use +`input_code()` to use a readline-based input function: this offers tab +completion of allocated channel-ids and known codewords. + +The Wormhole object has three APIs for generating or accepting a code: + +* `w.generate_code(length=2)`: this contacts the Rendezvous Server, allocates + a short numeric nameplate, chooses a configurable number of random words, + then assembles them into the code +* `w.set_code(code)`: this accepts the code as an argument +* `helper = w.input_code()`: this facilitates interactive entry of the code, + with tab-completion. The helper object has methods to return a list of + viable completions for whatever portion of the code has been entered so + far. A convenience wrapper is provided to attach this to the `rlcompleter` + function of libreadline. + +No matter which mode is used, the `w.when_code()` Deferred (or +`delegate.wormhole_got_code(code)` callback) will fire when the code is +known. `when_code` is clearly necessary for `generate_code`, since there's no +other way to learn what code was created, but it may be useful in other modes +for consistency. + +The code-entry Helper object has the following API: + +* `refresh_nameplates()`: requests an updated list of nameplates from the + Rendezvous Server. These form the first portion of the wormhole code (e.g. 
+ "4" in "4-purple-sausages"). Note that they are unicode strings (so "4", + not 4). The Helper will get the response in the background, and calls to + `get_nameplate_completions()` after the response will use the new list. + Calling this after `h.choose_nameplate` will raise + `AlreadyChoseNameplateError`. +* `matches = h.get_nameplate_completions(prefix)`: returns (synchronously) a + set of completions for the given nameplate prefix, along with the hyphen + that always follows the nameplate (and separates the nameplate from the + rest of the code). For example, if the server reports nameplates 1, 12, 13, + 24, and 170 are in use, `get_nameplate_completions("1")` will return + `{"1-", "12-", "13-", "170-"}`. You may want to sort these before + displaying them to the user. Raises `AlreadyChoseNameplateError` if called + after `h.choose_nameplate`. +* `h.choose_nameplate(nameplate)`: accepts a string with the chosen + nameplate. May only be called once, after which + `AlreadyChoseNameplateError` is raised. (in the future, this might + return a Deferred that fires (with None) when the nameplate's wordlist is + known (which happens after the nameplate is claimed, requiring a roundtrip + to the server)). +* `d = h.when_wordlist_is_available()`: return a Deferred that fires (with + None) when the wordlist is known. This can be used to block a readline + frontend which has just called `h.choose_nameplate()` until the resulting + wordlist is known, which can improve the tab-completion behavior. +* `matches = h.get_word_completions(prefix)`: return (synchronously) a set of + completions for the given words prefix. This will include a trailing hyphen + if more words are expected. The possible completions depend upon the + wordlist in use for the previously-claimed nameplate, so calling this + before `choose_nameplate` will raise `MustChooseNameplateFirstError`. + Calling this after `h.choose_words()` will raise `AlreadyChoseWordsError`. 
+ Given a prefix like "su", this returns a set of strings which are potential + matches (e.g. `{"supportive-", "surrender-", "suspicious-"}`). The prefix + should not include the nameplate, but *should* include whatever words and + hyphens have been typed so far (the default wordlist uses alternate lists, + where even-numbered words have three syllables, and odd-numbered words have + two, so the completions depend upon how many words are present, not just + the partial last word). E.g. `get_word_completions("pr")` will return + `{"processor-", "provincial-", "proximate-"}`, while + `get_word_completions("opulent-pr")` will return `{"opulent-preclude", + "opulent-prefer", "opulent-preshrunk", "opulent-printer", + "opulent-prowler"}` (note the lack of a trailing hyphen, because the + wordlist is expecting a code of length two). If the wordlist is not yet + known, this returns an empty set. All return values will + `.startswith(prefix)`. The frontend is responsible for sorting the results + before display. +* `h.choose_words(words)`: call this when the user is finished typing in the + code. It does not return anything, but will cause the Wormhole's + `w.when_code()` (or corresponding delegate) to fire, and triggers the + wormhole connection process. This accepts a string like "purple-sausages", + without the nameplate. It must be called after `h.choose_nameplate()` or + `MustChooseNameplateFirstError` will be raised. May only be called once, + after which `AlreadyChoseWordsError` is raised. + +The `input_with_completion` wrapper is a function that knows how to use the +code-entry helper to do tab completion of wormhole codes: + +```python +from wormhole import create, input_with_completion +w = create(appid, relay_url, reactor) +input_with_completion("Wormhole code:", w.input_code(), reactor) +d = w.when_code() +``` + +This helper runs python's (raw) `input()` function inside a thread, since +`input()` normally blocks. 
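The nameplate-completion behavior described above can be modeled in a few lines. This sketch is not the library's implementation (the function name is hypothetical); it just illustrates the documented semantics, using the nameplate list from the example above:

```python
def nameplate_completions(active_nameplates, prefix):
    # Completions keep the trailing hyphen that separates the
    # nameplate from the rest of the code (sketch, not the real API)
    return {n + "-" for n in active_nameplates if n.startswith(prefix)}

# Server reports nameplates 1, 12, 13, 24, and 170 in use:
print(nameplate_completions({"1", "12", "13", "24", "170"}, "1"))
# → {"1-", "12-", "13-", "170-"} (a set; sort before displaying)
```

As with the real helper, the result is unordered, so a frontend would sort it before presenting choices to the user.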
+ +The two machines participating in the wormhole setup are not distinguished: +it doesn't matter which one goes first, and both use the same Wormhole +constructor function. However, if `w.generate_code()` is used, only one side +should use it. + +## Offline Codes + +In most situations, the "sending" or "initiating" side will call +`w.generate_code()` and display the resulting code. The sending human reads +it and speaks, types, performs charades, or otherwise transmits the code to +the receiving human. The receiving human then types it into the receiving +computer, where it either calls `w.set_code()` (if the code is passed in via +argv) or `w.input_code()` (for interactive entry). + +Usually one machine generates the code, and a pair of humans transcribes it +to the second machine (so `w.generate_code()` on one side, and `w.set_code()` +or `w.input_code()` on the other). But it is also possible for the humans to +generate the code offline, perhaps at a face-to-face meeting, and then take +the code back to their computers. In this case, `w.set_code()` will be used +on both sides. It is unlikely that the humans will restrict themselves to a +pre-established wordlist when manually generating codes, so the completion +feature of `w.input_code()` is not helpful. + +When the humans create a wormhole code out-of-band, they are responsible +for choosing an unused channel-ID (simply picking a random 3-or-more digit +number is probably enough), and some random words. Dice, coin flips, shuffled +cards, or repeated sampling of a high-resolution stopwatch are all useful +techniques. The wormhole code uses the same format either way: channel-ID, +a hyphen, and an arbitrary string. There is no need to encode the sampled +random values (e.g. by using the Diceware wordlist) unless that makes it +easier to transcribe: e.g. rolling 6 dice could result in a code like +"913-166532", and flipping 16 coins could result in "123-HTTHHHTTHTTHHTHH". 
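The manual procedure can be sketched as follows. This is purely illustrative (the helper is not part of the library), using `random.SystemRandom` as a stand-in for physical dice; both humans would then enter the result via `w.set_code()`:

```python
import random

def offline_wormhole_code(num_digits=3, num_dice=6):
    # Illustrative only: a channel-ID of 3-or-more digits, a hyphen,
    # then an arbitrary random string (here, six simulated dice rolls)
    rng = random.SystemRandom()  # stand-in for dice/coins/stopwatch
    channel_id = str(rng.randrange(10 ** (num_digits - 1), 10 ** num_digits))
    rolls = "".join(str(rng.randrange(1, 7)) for _ in range(num_dice))
    return "%s-%s" % (channel_id, rolls)

print(offline_wormhole_code())  # e.g. "913-166532"
```

Real dice or coin flips are preferable for humans; the point is only the `NNN-arbitrary-string` shape of the result.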
+ +## Verifier + +For extra protection against guessing attacks, Wormhole can provide a +"Verifier". This is a moderate-length series of bytes (a SHA256 hash) that is +derived from the supposedly-shared session key. If desired, both sides can +display this value, and the humans can manually compare them before allowing +the rest of the protocol to proceed. If they do not match, then the two +programs are not talking to each other (they may both be talking to a +man-in-the-middle attacker), and the protocol should be abandoned. + +Deferred-mode applications can wait for `d=w.when_verified()`: the Deferred +it returns will fire with the verifier. You can turn this into hex or Base64 +to print it, or render it as ASCII-art, etc. + +Asking the wormhole object for the verifier does not affect the flow of the +protocol. To benefit from verification, applications must refrain from +sending any data (with `w.send(data)`) until after the verifiers are approved +by the user. In addition, applications must queue or otherwise ignore +incoming (received) messages until that point. However once the verifiers are +confirmed, previously-received messages can be considered valid and processed +as usual. + +## Welcome Messages + +The first message sent by the rendezvous server is a "welcome" message (a +dictionary). Clients should not wait for this message, but when it arrives, +they should process the keys it contains. + +The welcome message serves three main purposes: + +* notify users about important server changes, such as CAPTCHA requirements + driven by overload, or donation requests +* enable future protocol negotiation between clients and the server +* advise users of the CLI tools (`wormhole send`) to upgrade to a new version + +There are three keys currently defined for the welcome message, all of which +are optional (the welcome message omits "error" and "motd" unless the server +operator needs to signal a problem). 
+ +* `motd`: if this key is present, it will be a string with embedded newlines. + The client should display this string to the user, including a note that it + comes from the magic-wormhole Rendezvous Server and that server's URL. +* `error`: if present, the server has decided it cannot service this client. + The string will be wrapped in a `WelcomeError` (which is a subclass of + `WormholeError`), and all API calls will signal errors (pending Deferreds + will errback). The rendezvous connection will be closed. +* `current_cli_version`: if present, the server is advising instances of the + CLI tools (the `wormhole` command included in the python distribution) that + there is a newer release available, thus users should upgrade if they can, + because more features will be available if both clients are running the + same version. The CLI tools compare this string against their `__version__` + and can print a short message to stderr if an upgrade is warranted. + +There is currently no facility in the server to actually send `motd`, but a +static `error` string can be included by running the server with +`--signal-error=MESSAGE`. + +The main idea of `error` is to allow the server to cleanly inform the client +about some necessary action it didn't take. The server currently sends the +welcome message as soon as the client connects (even before it receives the +"claim" request), but a future server could wait for a required client +message and signal an error (via the Welcome message) if it didn't see this +extra message before the CLAIM arrived. + +This could enable changes to the protocol, e.g. requiring a CAPTCHA or +proof-of-work token when the server is under DoS attack. The new server would +send the current requirements in an initial message (which old clients would +ignore). New clients would be required to send the token before their "claim" +message. 
If the server sees "claim" before "token", it knows that the client +is too old to know about this protocol, and it could send a "welcome" with an +`error` field containing instructions (explaining to the user that the server +is under attack, and they must either upgrade to a client that can speak the +new protocol, or wait until the attack has passed). Either case is better +than an opaque exception later when the required message fails to arrive. + +(Note that the server can also send an explicit ERROR message at any time, +and the client should react with a ServerError. Versions 0.9.2 and earlier of +the library did not pay attention to the ERROR message, hence the server +should deliver errors in a WELCOME message if at all possible.) + +The `error` field is handled internally by the Wormhole object. The other +fields are processed by an application-supplied "welcome handler" function, +supplied as the `welcome_handler=` argument to `wormhole.create()`. This +function will be called with the full welcome dictionary, so any other keys +that a future server might send will be available to it. If the welcome +handler raises `WelcomeError`, the connection will be closed just as if an +`error` key had been received. The handler may be called multiple times (once +per connection, if the rendezvous connection is lost and then reestablished), +so applications should avoid presenting the user with redundant messages. + +The default welcome handler will print `motd` to stderr, and will ignore +`current_cli_version`. + +## Events + +As the wormhole connection is established, several events may be dispatched +to the application. In Delegated mode, these are dispatched by calling +functions on the delegate object. In Deferred mode, the application retrieves +Deferred objects from the wormhole, and event dispatch is performed by firing +those Deferreds. 
+ +* got_code (`yield w.when_code()` / `dg.wormhole_got_code(code)`): fired when + the wormhole code is established, either after `w.generate_code()` finishes + the generation process, or when the Input Helper returned by `w.input_code()` + has been told `h.choose_words()`, or immediately after `w.set_code(code)` is + called. This is most useful after calling `w.generate_code()`, to show the + generated code to the user so they can transcribe it to their peer. +* key (`yield w.when_key()` / `dg.wormhole_key()`): fired when the + key-exchange process has completed and a purported shared key is + established. At this point we do not know that anyone else actually shares + this key: the peer may have used the wrong code, or may have disappeared + altogether. To wait for proof that the key is shared, wait for + `when_verified` instead. This event is really only useful for detecting + that the initiating peer has disconnected after leaving the initial PAKE + message, to display a pacifying message to the user. +* verified (`verifier = yield w.when_verified()` / + `dg.wormhole_verified(verifier)`): fired when the key-exchange process has + completed and a valid VERSION message has arrived. The "verifier" is a byte + string with a hash of the shared session key; clients can compare them + (probably as hex) to ensure that they're really talking to each other, and + not to a man-in-the-middle. When `verified` happens, this side knows + that *someone* has used the correct wormhole code; if someone used the + wrong code, the VERSION message cannot be decrypted, and the wormhole will + be closed instead. +* version (`yield w.when_version()` / `dg.wormhole_version(versions)`): fired + when the VERSION message arrives from the peer. This fires at the same time + as `verified`, but delivers the "app_versions" data (as passed into + `wormhole.create(versions=)`) instead of the verifier string. 
+* received (`yield w.when_received()` / `dg.wormhole_received(data)`): fired + each time a data message arrives from the peer, with the bytestring that + the peer passed into `w.send(data)`. +* closed (`yield w.close()` / `dg.wormhole_closed(result)`): fired when + `w.close()` has finished shutting down the wormhole, which means all + nameplates and mailboxes have been deallocated, and the WebSocket + connection has been closed. This also fires if an internal error occurs + (specifically WrongPasswordError, which indicates that an invalid encrypted + message was received), which also shuts everything down. The `result` value + is an exception (or Failure) object if the wormhole closed badly, or a + string like "happy" if it had no problems before shutdown. + +## Sending Data + +The main purpose of a Wormhole is to send data. At any point after +construction, callers can invoke `w.send(data)`. This will queue the message +if necessary, but (if all goes well) will eventually result in the peer +getting a `received` event and the data being delivered to the application. + +Since Wormhole provides an ordered record pipe, each call to `w.send` will +result in exactly one `received` event on the far side. Records are not +split, merged, dropped, or reordered. + +Each side can do an arbitrary number of `send()` calls. The Wormhole is not +meant as a long-term communication channel, but some protocols work better if +they can exchange an initial pair of messages (perhaps offering some set of +negotiable capabilities), and then follow up with a second pair (to reveal +the results of the negotiation). The Rendezvous Server does not currently +enforce any particular limits on number of messages, size of messages, or +rate of transmission, but in general clients are expected to send fewer than +a dozen messages, of no more than perhaps 20kB in size (remember that all +these messages are temporarily stored in a SQLite database on the server). 
A +future version of the protocol may make these limits more explicit, and will +allow clients to ask for greater capacity when they connect (probably by +passing additional "mailbox attribute" parameters with the +`allocate`/`claim`/`open` messages). + +For bulk data transfer, see "transit.md", or the "Dilation" section below. + +## Closing + +When the application is done with the wormhole, it should call `w.close()`, +and wait for a `closed` event. This ensures that all server-side resources +are released (allowing the nameplate to be re-used by some other client), and +all network sockets are shut down. + +In Deferred mode, this just means waiting for the Deferred returned by +`w.close()` to fire. In Delegated mode, this means calling `w.close()` (which +doesn't return anything) and waiting for the delegate's `wormhole_closed()` +method to be called. + +`w.close()` will errback (with some form of `WormholeError`) if anything went +wrong with the process, such as: + +* `WelcomeError`: the server told us to signal an error, probably because the + client is too old to understand some new protocol feature +* `ServerError`: the server rejected something we did +* `LonelyError`: we didn't hear from the other side, so no key was + established +* `WrongPasswordError`: we received at least one incorrectly-encrypted + message. This probably indicates that the other side used a different + wormhole code than we did, perhaps because of a typo, or maybe an attacker + tried to guess your code and failed. + +If the wormhole was happy at the time it was closed, the `w.close()` Deferred +will callback (probably with the string "happy", but this may change in the +future). + +## Serialization + +(NOTE: this section is speculative: this code has not yet been written) + +Wormhole objects can be serialized. This can be useful for apps which save +their own state before shutdown, and restore it when they next start up +again. 
+ + +The `w.serialize()` method returns a dictionary which can be JSON encoded +into a unicode string (most applications will probably want to UTF-8 -encode +this into a bytestring before saving on disk somewhere). + +To restore a Wormhole, call `wormhole.from_serialized(data, reactor, +delegate)`. This will return a wormhole in roughly the same state as was +serialized (of course all the network connections will be disconnected). + +Serialization only works for delegated-mode wormholes (since Deferreds point +at functions, which cannot be serialized easily). It also only works for +"non-dilated" wormholes (see below). + +To ensure correct behavior, serialization should probably only be done in +"journaled mode". See journal.md for details. + +If you use serialization, be careful to never use the same partial wormhole +object twice. + +## Dilation + +(NOTE: this section is speculative: this code has not yet been written) + +In the longer term, the Wormhole object will incorporate the "Transit" +functionality (see transit.md) directly, removing the need to instantiate a +second object. A Wormhole can be "dilated" into a form that is suitable for +bulk data transfer. + +All wormholes start out "undilated". In this state, all messages are queued +on the Rendezvous Server for the lifetime of the wormhole, and server-imposed +number/size/rate limits apply. Calling `w.dilate()` initiates the dilation +process, and success is signalled via either `d=w.when_dilated()` firing, or +`dg.wormhole_dilated()` being called. Once dilated, the Wormhole can be used +as an IConsumer/IProducer, and messages will be sent on a direct connection +(if possible) or through the transit relay (if not). 
+
+What's good about a non-dilated wormhole?:
+
+* setup is faster: no delay while it tries to make a direct connection
+* survives temporary network outages, since messages are queued
+* works with "journaled mode", allowing progress to be made even when both
+  sides are never online at the same time, by serializing the wormhole
+
+What's good about dilated wormholes?:
+
+* they support bulk data transfer
+* you get flow control (backpressure), and IProducer/IConsumer
+* throughput is faster: no store-and-forward step
+
+Use non-dilated wormholes when your application only needs to exchange a
+couple of messages, for example to set up public keys or provision access
+tokens. Use a dilated wormhole to move large files.
+
+Dilated wormholes can provide multiple "channels": these are multiplexed
+through the single (encrypted) TCP connection. Each channel is a separate
+stream (offering IProducer/IConsumer).
+
+To create a channel, call `c = w.create_channel()` on a dilated wormhole. The
+"channel ID" can be obtained with `c.get_id()`. This ID will be a short
+(unicode) string, which can be sent to the other side via a normal
+`w.send()`, or any other means. On the other side, use `c =
+w.open_channel(channel_id)` to get a matching channel object.
+
+Then use `c.send(data)` and `d=c.when_received()` to exchange data, or wire
+them up with `c.registerProducer()`. Note that channels do not close until
+the wormhole connection is closed, so they do not have separate `close()`
+methods or events. Therefore if you plan to send files through them, you'll
+need to inform the recipient ahead of time about how many bytes to expect.
 
 ## Bytes, Strings, Unicode, and Python 3
@@ -198,3 +567,20 @@ in python3):
 * transit connection hints (e.g.
"host:port") * application identifier * derived-key "purpose" string: `w.derive_key(PURPOSE, LENGTH)` + +## Full API list + +action | Deferred-Mode | Delegated-Mode +------------------ | -------------------- | -------------- +w.generate_code() | | +w.set_code(code) | | +h=w.input_code() | | +. | d=w.when_code() | dg.wormhole_code(code) +. | d=w.when_verified() | dg.wormhole_verified(verifier) +. | d=w.when_version() | dg.wormhole_version(version) +w.send(data) | | +. | d=w.when_received() | dg.wormhole_received(data) +key=w.derive_key(purpose, length) | | +w.close() | | dg.wormhole_closed(result) +. | d=w.close() | + diff --git a/docs/client-protocol.md b/docs/client-protocol.md new file mode 100644 index 0000000..331ccf6 --- /dev/null +++ b/docs/client-protocol.md @@ -0,0 +1,63 @@ +# Client-to-Client Protocol + +Wormhole clients do not talk directly to each other (at least at first): they +only connect directly to the Rendezvous Server. They ask this server to +convey messages to the other client (via the `add` command and the `message` +response). This document explains the format of these client-to-client +messages. + +Each such message contains a "phase" string, and a hex-encoded binary "body". + +Any phase which is purely numeric (`^\d+$`) is reserved for application data, +and will be delivered in numeric order. All other phases are reserved for the +Wormhole client itself. Clients will ignore any phase they do not recognize. + +Immediately upon opening the mailbox, clients send the `pake` phase, which +contains the binary SPAKE2 message (the one computed as `X+M*pw` or +`Y+N*pw`). + +Upon receiving their peer's `pake` phase, clients compute and remember the +shared key. They derive the "verifier" (a hash of the shared key) and deliver +it to the application by calling `got_verifier`: applications can display +this to users who want additional assurance (by manually comparing the values +from both sides: they ought to be identical). 
At this point clients also send +the encrypted `version` phase, whose plaintext payload is a UTF-8-encoded +JSON-encoded dictionary of metadata. This allows the two Wormhole instances +to signal their ability to do other things (like "dilate" the wormhole). The +version data will also include an `app_versions` key which contains a +dictionary of metadata provided by the application, allowing apps to perform +similar negotiation. + +At this stage, the client knows the supposed shared key, but has not yet seen +evidence that the peer knows it too. When the first peer message arrives +(i.e. the first message with a `.side` that does not equal our own), it will +be decrypted: we use authenticated encryption (`nacl.SecretBox`), so if this +decryption succeeds, then we're confident that *somebody* used the same +wormhole code as us. This event pushes the client mood from "lonely" to +"happy". + +This might be triggered by the peer's `version` message, but if we had to +re-establish the Rendezvous Server connection, we might get peer messages out +of order and see some application-level message first. + +When a `version` message is successfully decrypted, the application is +signaled with `got_version`. When any application message is successfully +decrypted, `received` is signaled. Application messages are delivered +strictly in-order: if we see phases 3 then 2 then 1, all three will be +delivered in sequence after phase 1 is received. + +If any message cannot be successfully decrypted, the mood is set to "scary", +and the wormhole is closed. All pending Deferreds will be errbacked with a +`WrongPasswordError` (a subclass of `WormholeError`), the nameplate/mailbox +will be released, and the WebSocket connection will be dropped. If the +application calls `close()`, the resulting Deferred will not fire until +deallocation has finished and the WebSocket is closed, and then it will fire +with an errback. + +Both `version` and all numeric (app-specific) phases are encrypted. 
The +message body will be the hex-encoded output of a NaCl `SecretBox`, keyed by a +phase+side -specific key (computed with HKDF-SHA256, using the shared PAKE +key as the secret input, and `wormhole:phase:%s%s % (SHA256(side), +SHA256(phase))` as the CTXinfo), with a random nonce. + + diff --git a/docs/events.dot b/docs/events.dot deleted file mode 100644 index ca1710d..0000000 --- a/docs/events.dot +++ /dev/null @@ -1,98 +0,0 @@ -digraph { - /*rankdir=LR*/ - api_get_code [label="get_code" shape="hexagon" color="red"] - api_input_code [label="input_code" shape="hexagon" color="red"] - api_set_code [label="set_code" shape="hexagon" color="red"] - verify [label="verify" shape="hexagon" color="red"] - send [label="API\nsend" shape="hexagon" color="red"] - get [label="API\nget" shape="hexagon" color="red"] - close [label="API\nclose" shape="hexagon" color="red"] - - event_connected [label="connected" shape="box"] - event_learned_code [label="learned\ncode" shape="box"] - event_learned_nameplate [label="learned\nnameplate" shape="box"] - event_received_mailbox [label="received\nmailbox" shape="box"] - event_opened_mailbox [label="opened\nmailbox" shape="box"] - event_built_msg1 [label="built\nmsg1" shape="box"] - event_mailbox_used [label="mailbox\nused" shape="box"] - event_learned_PAKE [label="learned\nmsg2" shape="box"] - event_established_key [label="established\nkey" shape="box"] - event_computed_verifier [label="computed\nverifier" shape="box"] - event_received_confirm [label="received\nconfirm" shape="box"] - event_received_message [label="received\nmessage" shape="box"] - event_received_released [label="ack\nreleased" shape="box"] - event_received_closed [label="ack\nclosed" shape="box"] - - event_connected -> api_get_code - event_connected -> api_input_code - api_get_code -> event_learned_code - api_input_code -> event_learned_code - api_set_code -> event_learned_code - - - maybe_build_msg1 [label="build\nmsg1"] - maybe_claim_nameplate 
[label="claim\nnameplate"] - maybe_send_pake [label="send\npake"] - maybe_send_phase_messages [label="send\nphase\nmessages"] - - event_connected -> maybe_claim_nameplate - event_connected -> maybe_send_pake - - event_built_msg1 -> maybe_send_pake - - event_learned_code -> maybe_build_msg1 - event_learned_code -> event_learned_nameplate - - maybe_build_msg1 -> event_built_msg1 - event_learned_nameplate -> maybe_claim_nameplate - maybe_claim_nameplate -> event_received_mailbox [style="dashed"] - - event_received_mailbox -> event_opened_mailbox - maybe_claim_nameplate -> event_learned_PAKE [style="dashed"] - maybe_claim_nameplate -> event_received_confirm [style="dashed"] - - event_opened_mailbox -> event_learned_PAKE [style="dashed"] - event_learned_PAKE -> event_mailbox_used [style="dashed"] - event_learned_PAKE -> event_received_confirm [style="dashed"] - event_received_confirm -> event_received_message [style="dashed"] - - send -> maybe_send_phase_messages - release_nameplate [label="release\nnameplate"] - event_mailbox_used -> release_nameplate - event_opened_mailbox -> maybe_send_pake - event_opened_mailbox -> maybe_send_phase_messages - - event_learned_PAKE -> event_established_key - event_established_key -> event_computed_verifier - event_established_key -> check_confirmation - event_established_key -> maybe_send_phase_messages - - check_confirmation [label="check\nconfirmation"] - event_received_confirm -> check_confirmation - - notify_verifier [label="notify\nverifier"] - check_confirmation -> notify_verifier - verify -> notify_verifier - event_computed_verifier -> notify_verifier - - check_confirmation -> error - event_received_message -> error - event_received_message -> get - event_established_key -> get - - close -> close_mailbox - close -> release_nameplate - error [label="signal\nerror"] - error -> close_mailbox - error -> release_nameplate - - release_nameplate -> event_received_released [style="dashed"] - close_mailbox [label="close\nmailbox"] - 
close_mailbox -> event_received_closed [style="dashed"]
-
- maybe_close_websocket [label="close\nwebsocket"]
- event_received_released -> maybe_close_websocket
- event_received_closed -> maybe_close_websocket
- maybe_close_websocket -> event_websocket_closed [style="dashed"]
- event_websocket_closed [label="websocket\nclosed"]
-}
diff --git a/docs/file-transfer-protocol.md b/docs/file-transfer-protocol.md
new file mode 100644
index 0000000..ed26b73
--- /dev/null
+++ b/docs/file-transfer-protocol.md
@@ -0,0 +1,191 @@
+# File-Transfer Protocol
+
+The `bin/wormhole` tool uses a Wormhole to establish a connection, then
+speaks a file-transfer -specific protocol over that Wormhole to decide how to
+transfer the data. This application-layer protocol is described here.
+
+All application-level messages are dictionaries, which are JSON-encoded and
+UTF-8 encoded before being handed to `wormhole.send` (which then encrypts
+them before sending through the rendezvous server to the peer).
+
+## Sender
+
+`wormhole send` has two main modes: file/directory (which requires a
+non-wormhole Transit connection), or text (which does not).
+
+If the sender is doing files or directories, its first message contains just
+a `transit` key, whose value is a dictionary with `abilities-v1` and
+`hints-v1` keys. These are given to the Transit object, described below.
+
+Then (for both files/directories and text) it sends a message with an `offer`
+key. The offer contains a single key, exactly one of (`message`, `file`, or
+`directory`). For `message`, the value is the message being sent.
For `file`
+and `directory`, it contains a dictionary with additional information:
+
+* `message`: the text message, for text-mode
+* `file`: for file-mode, a dict with `filename` and `filesize`
+* `directory`: for directory-mode, a dict with:
+  * `mode`: the compression mode, currently always `zipfile/deflated`
+  * `dirname`
+  * `zipsize`: integer, size of the transmitted data in bytes
+  * `numbytes`: integer, estimated total size of the uncompressed directory
+  * `numfiles`: integer, number of files+directories being sent
+
+The sender runs a loop where it waits for similar dictionary-shaped messages
+from the recipient, and processes them. It reacts to the following keys:
+
+* `error`: use the value to throw a TransferError and terminate
+* `transit`: use the value to build the Transit instance
+* `answer`:
+  * if `message_ack: ok` is in the value (we're in text-mode), then exit with success
+  * if `file_ack: ok` is in the value (and we're in file/directory mode), then
+    wait for Transit to connect, then send the file through Transit, then wait
+    for an ack (via Transit), then exit
+
+The sender can handle all of these keys in the same message, or spaced out
+over multiple ones. It will ignore any keys it doesn't recognize, and will
+completely ignore messages that don't contain any recognized key. The only
+constraint is that the message containing `message_ack` or `file_ack` is the
+last one: it will stop looking for wormhole messages at that point.
+
+## Recipient
+
+`wormhole receive` is used for both file/directory-mode and text-mode: it
+learns which is being used from the `offer` message.
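The sender's loop described above amounts to a dictionary dispatch. A minimal sketch, where the `on_transit` hook and the return values are illustrative rather than the actual implementation:

```python
class TransferError(Exception):
    pass

def process_sender_message(msg, on_transit=None):
    """React to one dictionary-shaped message from the recipient.

    Returns "text-done" or "file-done" when a final ack is seen, else None.
    Unrecognized keys (and messages with no recognized keys) are ignored.
    """
    if "error" in msg:
        raise TransferError(msg["error"])
    if "transit" in msg and on_transit is not None:
        on_transit(msg["transit"])  # use the value to build the Transit instance
    answer = msg.get("answer", {})
    if answer.get("message_ack") == "ok":
        return "text-done"   # text mode: exit with success
    if answer.get("file_ack") == "ok":
        return "file-done"   # file mode: proceed to the Transit transfer
    return None

# Messages without any recognized key are completely ignored.
assert process_sender_message({"unknown": 1}) is None
assert process_sender_message({"answer": {"message_ack": "ok"}}) == "text-done"
```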
+
+The recipient enters a loop where it processes the following keys from each
+received message:
+
+* `error`: if present in any message, the recipient raises TransferError
+  (with the value) and exits immediately (before processing any other keys)
+* `transit`: the value is used to build the Transit instance
+* `offer`: parse the offer:
+  * `message`: accept the message and terminate
+  * `file`: connect a Transit instance, wait for it to deliver the indicated
+    number of bytes, then write them to the target filename
+  * `directory`: as with `file`, but unzip the bytes into the target directory
+
+## Transit
+
+The Wormhole API does not currently provide for large-volume data transfer
+(this feature will be added to a future version, under the name "Dilated
+Wormhole"). For now, bulk data is sent through a "Transit" object, which does
+not use the Rendezvous Server. Instead, it tries to establish a direct TCP
+connection from sender to recipient (or vice versa). If that fails, both
+sides connect to a "Transit Relay", a very simple server that just glues two
+TCP sockets together when asked.
+
+The Transit object is created with a key (the same key on each side), and all
+data sent through it will be encrypted with a derivation of that key. The
+transit key is also used to derive handshake messages which are used to make
+sure we're talking to the right peer, and to help the Transit Relay match up
+the two client connections. Unlike Wormhole objects (which are symmetric),
+Transit objects come in pairs: one side is the Sender, and the other is the
+Receiver.
+
+Like Wormhole, Transit provides an encrypted record pipe. If you call
+`.send()` with 40 bytes, the other end will see a `.gotData()` with exactly
+40 bytes: no splitting, merging, dropping, or re-ordering. The Transit object
+also functions as a twisted Producer/Consumer, so it can be connected
+directly to file-readers and writers, and does flow-control properly.
+ +Most of the complexity of the Transit object has to do with negotiating and +scheduling likely targets for the TCP connection. + +Each Transit object has a set of "abilities". These are outbound connection +mechanisms that the client is capable of using. The basic CLI tool (running +on a normal computer) has two abilities: `direct-tcp-v1` and `relay-v1`. + +* `direct-tcp-v1` indicates that it can make outbound TCP connections to a + requested host and port number. "v1" means that the first thing sent over + these connections is a specific derived handshake message, e.g. `transit + sender HEXHEX ready\n\n`. +* `relay-v1` indicates it can connect to the Transit Relay and speak the + matching protocol (in which the first message is `please relay HEXHEX for + side HEX\n`, and the relay might eventually say `ok\n`). + +Future implementations may have additional abilities, such as connecting +directly to Tor onion services, I2P services, WebSockets, WebRTC, or other +connection technologies. Implementations on some platforms (such as web +browsers) may lack `direct-tcp-v1` or `relay-v1`. + +While it isn't strictly necessary for both sides to emit what they're capable +of using, it does help performance: a Tor Onion-service -capable receiver +shouldn't spend the time and energy to set up an onion service if the sender +can't use it. + +After learning the abilities of its peer, the Transit object can create a +list of "hints", which are endpoints that the peer should try to connect to. +Each hint will fall under one of the abilities that the peer indicated it +could use. Hints have types like `direct-tcp-v1`, `tor-tcp-v1`, and +`relay-v1`. 
Hints are encoded into dictionaries (with a mandatory `type` key, +and other keys as necessary): + +* `direct-tcp-v1` {hostname:, port:, priority:?} +* `tor-tcp-v1` {hostname:, port:, priority:?} +* `relay-v1` {hints: [{hostname:, port:, priority:?}, ..]} + +For example, if our peer can use `direct-tcp-v1`, then our Transit object +will deduce our local IP addresses (unless forbidden, i.e. we're using Tor), +listen on a TCP port, then send a list of `direct-tcp-v1` hints pointing at +all of them. If our peer can use `relay-v1`, then we'll connect to our relay +server and give the peer a hint to the same. + +`tor-tcp-v1` hints indicate an Onion service, which cannot be reached without +Tor. `direct-tcp-v1` hints can be reached with direct TCP connections (unless +forbidden) or by proxying through Tor. Onion services take about 30 seconds +to spin up, but bypass NAT, allowing two clients behind NAT boxes to connect +without a transit relay (really, the entire Tor network is acting as a +relay). + +The file-transfer application uses `transit` messages to convey these +abilities and hints from one Transit object to the other. After updating the +Transit objects, it then asks the Transit object to connect, whereupon +Transit will try to connect to all the hints that it can, and will use the +first one that succeeds. + +The file-transfer application, when actually sending file/directory data, +will close the Wormhole as soon as it has enough information to begin opening +the Transit connection. The final ack of the received data is sent through +the Transit object, as a UTF-8-encoded JSON-encoded dictionary with `ack: ok` +and `sha256: HEXHEX` containing the hash of the received data. 
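As an illustration, a complete `transit` message built from the hint dictionaries above might look like this (the hostnames, ports, and priorities are made up):

```python
import json

# Illustrative `transit` message for the file-transfer protocol, listing
# this side's abilities and connection hints. Addresses are invented.
transit_message = {
    "transit": {
        "abilities-v1": [{"type": "direct-tcp-v1"}, {"type": "relay-v1"}],
        "hints-v1": [
            # a direct hint for each deduced local address
            {"type": "direct-tcp-v1", "hostname": "192.168.1.5",
             "port": 9482, "priority": 0.0},
            # a relay hint pointing at our transit relay
            {"type": "relay-v1",
             "hints": [{"hostname": "relay.example.com", "port": 4001,
                        "priority": 2.0}]},
        ],
    },
}

# Like all application-level messages, it is JSON-encoded and UTF-8-encoded
# before being handed to wormhole.send().
encoded = json.dumps(transit_message).encode("utf-8")
```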
+
+
+## Future Extensions
+
+Transit will be extended to provide other connection techniques:
+
+* WebSocket: usable by web browsers, not too hard to use by normal computers,
+  requires direct (or relayed) TCP connection
+* WebRTC: usable by web browsers, hard-but-technically-possible to use by
+  normal computers, provides NAT hole-punching for "free"
+* (web browsers cannot make direct TCP connections, so interop between
+  browsers and CLI clients will either require adding WebSocket to CLI, or a
+  relay that is capable of speaking/bridging both)
+* I2P: like Tor, but not capable of proxying to normal TCP hints.
+* ICE-mediated STUN/STUNT: NAT hole-punching, assisted somewhat by a server
+  that can tell you your external IP address and port. Maybe implemented as a
+  uTP stream (which is UDP based, and thus easier to get through NAT).
+
+The file-transfer protocol will be extended too:
+
+* "command mode": establish the connection, *then* figure out what we want to
+  use it for, allowing multiple files to be exchanged, in either direction.
+  This is to support a GUI that lets you open the wormhole, then drop files
+  into it on either end.
+* some Transit messages being sent early, so ports and Onion services can be
+  spun up earlier, to reduce overall waiting time
+* transit messages being sent in multiple phases: maybe the transit
+  connection can progress while waiting for the user to confirm the transfer
+
+The hope is that by sending everything in dictionaries and multiple messages,
+there will be enough wiggle room to make these extensions in a
+backwards-compatible way. For example, to add "command mode" while allowing
+the fancy new (as yet unwritten) GUI client to interoperate with
+old-fashioned one-file-only CLI clients, we need the GUI tool to send an "I'm
+capable of command mode" in the VERSION message, and look for it in the
+received VERSION.
If it isn't present, it will either expect to see an offer
+(if the other side is sending), or nothing (if it is waiting to receive), and
+can explain the situation to the user accordingly. It might show a locked set
+of bars over the wormhole graphic to mean "cannot send", or a "waiting to
+send them a file" overlay for send-only.
diff --git a/docs/introduction.md b/docs/introduction.md
new file mode 100644
index 0000000..7e2c255
--- /dev/null
+++ b/docs/introduction.md
@@ -0,0 +1,56 @@
+# Magic-Wormhole
+
+The magic-wormhole (Python) distribution provides several things: an
+executable tool ("bin/wormhole"), an importable library (`import wormhole`),
+the URL of a publicly-available Rendezvous Server, and the definition of a
+protocol used by all three.
+
+The executable tool provides basic sending and receiving of files,
+directories, and short text strings. These all use `wormhole send` and
+`wormhole receive` (which can be abbreviated as `wormhole tx` and `wormhole
+rx`). It also has a mode to facilitate the transfer of SSH keys. This tool,
+while useful on its own, is just one possible use of the protocol.
+
+The `wormhole` library provides an API to establish a bidirectional ordered
+encrypted record pipe to another instance (where each record is an
+arbitrary-sized bytestring). This does not provide file-transfer directly:
+the "bin/wormhole" tool speaks a simple protocol through this record pipe to
+negotiate and perform the file transfer.
+
+`wormhole/cli/public_relay.py` contains the URLs of a Rendezvous Server and a
+Transit Relay which I provide to support the file-transfer tools, which other
+developers should feel free to use for their applications as well. I cannot
+make any guarantees about performance or uptime for these servers: if you
+want to use Magic Wormhole in a production environment, please consider
+running a server on your own infrastructure (just run `wormhole-server start`
+and modify the URLs in your application to point at it).
+
+## The Magic-Wormhole Protocol
+
+There are several layers to the protocol.
+
+At the bottom level, each client opens a WebSocket to the Rendezvous Server,
+sending JSON-based commands to the server, and receiving similarly-encoded
+messages. Some of these commands are addressed to the server itself, while
+others are instructions to queue a message to other clients, or are
+indications of messages coming from other clients. All these messages are
+described in "server-protocol.md".
+
+These inter-client messages are used to convey the PAKE protocol exchange,
+then a "VERSION" message (which doubles to verify the session key), then some
+number of encrypted application-level data messages. "client-protocol.md"
+describes these wormhole-to-wormhole messages.
+
+Each wormhole-using application is then free to interpret the data messages
+as it pleases. The file-transfer app sends an "offer" from the `wormhole
+send` side, to which the `wormhole receive` side sends a response, after
+which the Transit connection is negotiated (if necessary), and finally the
+data is sent through the Transit connection. "file-transfer-protocol.md"
+describes this application's use of the client messages.
+
+## The `wormhole` API
+
+Applications use the `wormhole` library to establish wormhole connections and
+exchange data through them. Please see `api.md` for a complete description of
+this interface.
+
diff --git a/docs/journal.md b/docs/journal.md
new file mode 100644
index 0000000..072c01f
--- /dev/null
+++ b/docs/journal.md
@@ -0,0 +1,148 @@
+# Journaled Mode
+
+(note: this section is speculative, the code has not yet been written)
+
+Magic-Wormhole supports applications which are written in a "journaled" or
+"checkpointed" style. These apps store their entire state in a well-defined
+checkpoint (perhaps in a database), and react to inbound events or messages
+by carefully moving from one state to another, then releasing any outbound
+messages.
As a result, they can be terminated safely at any moment, without
+warning, and ensure that the externally-visible behavior is deterministic and
+independent of this stop/restart timing.
+
+This is the style encouraged by the E event loop, the
+original [Waterken Server](http://waterken.sourceforge.net/), and the more
+modern [Ken Platform](http://web.eecs.umich.edu/~tpkelly/Ken/), all
+influential in the object-capability security community.
+
+## Requirements
+
+Applications written in this style must follow some strict rules:
+
+* all state goes into the checkpoint
+* the only way to affect the state is by processing an input message
+* event processing is deterministic (any non-determinism must be implemented
+  as a message, e.g. from a clock service or a random-number generator)
+* apps must never forget a message for which they've accepted responsibility
+
+The main processing function takes the previous state checkpoint and a single
+input message, and produces a new state checkpoint and a set of output
+messages. For performance, the state might be kept in memory between events,
+but the behavior should be indistinguishable from that of a server which
+terminates completely between events.
+
+In general, applications must tolerate duplicate inbound messages, and should
+re-send outbound messages until the recipient acknowledges them. Any outbound
+responses to an inbound message must be queued until the checkpoint is
+recorded. If outbound messages were delivered before the checkpointing, then
+a crash just after delivery would roll the process back to a state where it
+forgot about the inbound event, causing observably inconsistent behavior that
+depends upon whether the outbound message successfully escaped the dying
+process or not.
+
+As a result, journaled-style applications use a very specific process when
+interacting with the outside world.
Their event-processing function looks
+like:
+
+* receive inbound event
+* (load state)
+* create queue for any outbound messages
+* process message (changing state and queuing outbound messages)
+* serialize state, record in checkpoint
+* deliver any queued outbound messages
+
+In addition, the protocols used to exchange messages should include message
+IDs and acks. Part of the state vector will include a set of unacknowledged
+outbound messages. When a connection is established, all outbound messages
+should be re-sent, and messages are removed from the pending set when an
+inbound ack is received. The state must include a set of inbound message ids
+which have been processed already. All inbound messages receive an ack, but
+only new ones are processed. Connection establishment/loss is not strictly
+included in the journaled-app model (in Waterken/Ken, message delivery is
+provided by the platform, and apps do not know about connections), but in
+general:
+
+* "I want to have a connection" is stored in the state vector
+* "I am connected" is not
+* when a connection is established, code can run to deliver pending messages,
+  and this does not qualify as an inbound event
+* inbound events can only happen when at least one connection is established
+* immediately after restarting from a checkpoint, no connections are
+  established, but the app might initiate outbound connections, or prepare to
+  accept inbound ones
+
+## Wormhole Support
+
+To support this mode, the Wormhole constructor accepts a `journal=` argument.
+If provided, it must be an object that implements the `wormhole.IJournal` +interface, which consists of two methods: + +* `j.queue_outbound(fn, *args, **kwargs)`: used to delay delivery of outbound + messages until the checkpoint has been recorded +* `j.process()`: a context manager which should be entered before processing + inbound messages + +`wormhole.Journal` is an implementation of this interface, which is +constructed with a (synchronous) `save_checkpoint` function. Applications can +use it, or bring their own. + +The Wormhole object, when configured with a journal, will wrap all inbound +WebSocket message processing with the `j.process()` context manager, and will +deliver all outbound messages through `j.queue_outbound`. Applications using +such a Wormhole must also use the same journal for their own (non-wormhole) +events. It is important to coordinate multiple sources of events: e.g. a UI +event may cause the application to call `w.send(data)`, and the outbound +wormhole message should be checkpointed along with the app's state changes +caused by the UI event. Using a shared journal for both wormhole- and +non-wormhole- events provides this coordination. + +The `save_checkpoint` function should serialize application state along with +any Wormholes that are active. Wormhole state can be obtained by calling +`w.serialize()`, which will return a dictionary (that can be +JSON-serialized). At application startup (or checkpoint resumption), +Wormholes can be regenerated with `wormhole.from_serialized()`. Note that +only "delegated-mode" wormholes can be serialized: Deferreds are not amenable +to usage beyond a single process lifetime. + +For a functioning example of a journaled-mode application, see +misc/demo-journal.py. 
The following snippet may help illustrate the concepts:
+
+```python
+class App:
+    @classmethod
+    def new(klass):
+        self = klass()
+        self.state = {}
+        self.j = wormhole.Journal(self.save_checkpoint)
+        self.w = wormhole.create(.., delegate=self, journal=self.j)
+        return self
+
+    @classmethod
+    def from_serialized(klass):
+        self = klass()
+        self.j = wormhole.Journal(self.save_checkpoint)
+        with open("state.json", "r") as f:
+            data = json.load(f)
+        self.state = data["state"]
+        self.w = wormhole.from_serialized(data["wormhole"], reactor,
+                                          delegate=self, journal=self.j)
+        return self
+
+    def inbound_event(self, event):
+        # non-wormhole events must be performed in the journal context
+        with self.j.process():
+            parse_event(event)
+            change_state()
+            self.j.queue_outbound(self.send, outbound_message)
+
+    def wormhole_received(self, data):
+        # wormhole events are already performed in the journal context
+        change_state()
+        self.j.queue_outbound(self.send, stuff)
+
+    def send(self, outbound_message):
+        actually_send_message(outbound_message)
+
+    def save_checkpoint(self):
+        app_state = {"state": self.state, "wormhole": self.w.serialize()}
+        with open("state.json", "w") as f:
+            json.dump(app_state, f)
+```
diff --git a/docs/server-protocol.md b/docs/server-protocol.md
new file mode 100644
index 0000000..3afb967
--- /dev/null
+++ b/docs/server-protocol.md
@@ -0,0 +1,237 @@
+# Rendezvous Server Protocol
+
+## Concepts
+
+The Rendezvous Server provides queued delivery of binary messages from one
+client to a second, and vice versa. Each message contains a "phase" (a
+string) and a body (bytestring). These messages are queued in a "Mailbox"
+until the other side connects and retrieves them, but are delivered
+immediately if both sides are connected to the server at the same time.
+
+Mailboxes are identified by a large random string. "Nameplates", in contrast,
+have short numeric identities: in a wormhole code like "4-purple-sausages",
+the "4" is the nameplate.
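Since the nameplate is just the leading numeric portion of the wormhole code, extracting it is a one-liner:

```python
def nameplate_of(code):
    """Return the nameplate: the numeric prefix of a wormhole code."""
    return code.split("-", 1)[0]

assert nameplate_of("4-purple-sausages") == "4"
```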
+ +Each client has a randomly-generated "side", a short hex string, used to +differentiate between echoes of a client's own message, and real messages +from the other client. + +## Application IDs + +The server isolates each application from the others. Each client provides an +AppID when it first connects (via the `bind` message), and all subsequent +commands are scoped to this application. This means that nameplates +(described below) and mailboxes can be re-used between different apps. The +AppID is a unicode string. Both sides of the wormhole must use the same +AppID, of course, or they'll never see each other. The server keeps track of +which applications are in use for maintenance purposes. + +Each application should use a unique AppID. Developers are encouraged to use +"DNSNAME/APPNAME" to obtain a unique one: e.g. the `bin/wormhole` +file-transfer tool uses `lothar.com/wormhole/text-or-file-xfer`. + +## WebSocket Transport + +At the lowest level, each client establishes (and maintains) a WebSocket +connection to the Rendezvous Server. If the connection is lost (which could +happen because the server was rebooted for maintenance, or because the +client's network connection migrated from one network to another, or because +the resident network gremlins decided to mess with you today), clients should +reconnect after waiting a random (and exponentially-growing) delay. The +Python implementation waits about 1 second after the first connection loss, +growing by 50% each time, capped at 1 minute. + +Each message to the server is a dictionary, with at least a `type` key, and +other keys that depend upon the particular message type. Messages from server +to client follow the same format. + +`misc/dump-timing.py` is a debug tool which renders timing data gathered from +the server and both clients, to identify protocol slowdowns and guide +optimization efforts. To support this, the client/server messages include +additional keys.
Client->Server messages include a random `id` key, which is +copied into the `ack` that is immediately sent back to the client for all +commands (logged for the timing tool but otherwise ignored). Some +client->server messages (`list`, `allocate`, `claim`, `release`, `close`, +`ping`) provoke a direct response by the server: for these, `id` is copied +into the response. This helps the tool correlate the command and response. +All server->client messages have a `server_tx` timestamp (seconds since +epoch, as a float), which records when the message left the server. Direct +responses include a `server_rx` timestamp, to record when the client's +command was received. The tool combines these with local timestamps (recorded +by the client and not shared with the server) to build a full picture of +network delays and round-trip times. + +All messages are serialized as JSON, encoded to UTF-8, and the resulting +bytes are sent as a single "binary-mode" WebSocket payload. + +The server can signal `error` for any message type it does not recognize. +Clients and servers must ignore unrecognized keys in otherwise-recognized +messages. Clients must ignore unrecognized message types from the server. + +## Connection-Specific (Client-to-Server) Messages + +The first thing each client sends to the server, immediately after the +WebSocket connection is established, is a `bind` message. This specifies the +AppID and side (in keys `appid` and `side`, respectively) that all subsequent +messages will be scoped to. While technically each message could be +independent (with its own `appid` and `side`), I thought it would be less +confusing to use exactly one WebSocket per logical wormhole connection. + +The first thing the server sends to each client is the `welcome` message. +This is intended to deliver important status information to the client that +might influence its operation.
The Python client currently reacts to the +following keys (and ignores all others): + +* `current_cli_version`: prompts the user to upgrade if the server's + advertised version is greater than the client's version (as derived from + the git tag) +* `motd`: prints this message, if present; intended to inform users about + performance problems, scheduled downtime, or to beg for donations to keep + the server running +* `error`: causes the client to print the message and then terminate. If a + future version of the protocol requires a rate-limiting CAPTCHA ticket or + other authorization record, the server can send `error` (explaining the + requirement) if it does not see this ticket arrive before the `bind`. + +A `ping` will provoke a `pong`: these are only used by unit tests for +synchronization purposes (to detect when a batch of messages has been fully +processed by the server). NAT-binding refresh messages are handled by the +WebSocket layer (by asking Autobahn to send a keepalive message every 60 +seconds), and do not use `ping`. + +If any client->server command is invalid (e.g. it lacks a necessary key, or +was sent in the wrong order), an `error` response will be sent. This response +will include the error string in the `error` key, and a full copy of the +original message dictionary in `orig`. + +## Nameplates + +Wormhole codes look like `4-purple-sausages`, consisting of a number followed +by some random words. This number is called a "Nameplate". + +On the Rendezvous Server, the Nameplate contains a pointer to a Mailbox. +Clients can "claim" a nameplate, and then later "release" it. Each claim is +for a specific side (so one client claiming the same nameplate multiple times +only counts as one claim). Nameplates are deleted once the last client has +released them, or after some period of inactivity. + +Clients can either make up nameplates themselves, or (more commonly) ask the +server to allocate one for them.
Allocating a nameplate automatically claims +it (to avoid a race condition), but for simplicity, clients send a claim for +all nameplates, even ones which they've allocated themselves. + +Nameplates (on the server) must live until the second client has learned +about the associated mailbox, after which point they can be reused by other +clients. So if two clients connect quickly, but then maintain a long-lived +wormhole connection, they do not need to consume the limited space of short +nameplates for that whole time. + +The `allocate` command allocates a nameplate (the server returns one that is +as short as possible), and the `allocated` response provides the answer. +Clients can also send a `list` command to get back a `nameplates` response +with all allocated nameplates for the bound AppID: this helps the code-input +tab-completion feature know which prefixes to offer. The `nameplates` +response returns a list of dictionaries, one per claimed nameplate, with at +least an `id` key in each one (with the nameplate string). Future versions +may record additional attributes in the nameplate records, specifically a +wordlist identifier and a code length (again to help with code-completion on +the receiver). + +## Mailboxes + +The server provides a single "Mailbox" to each pair of connecting Wormhole +clients. This holds an unordered set of messages, delivered immediately to +connected clients, and queued for delivery to clients which connect later. +Messages from both clients are merged together: clients use the included +`side` identifier to distinguish echoes of their own messages from those +coming from the other client. + +Each mailbox is "opened" by some number of clients at a time, until all +clients have closed it. Mailboxes are kept alive by either an open client, or +a Nameplate which points to the mailbox (so when a Nameplate is deleted due +to inactivity, the corresponding Mailbox will be too).
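Since both sides' messages are merged into one mailbox, and the server neither de-duplicates nor orders deliveries, a client needs a small inbound filter. A sketch of one (a hypothetical helper, not part of the library):

```python
def accept_message(msg, my_side, seen):
    """Return True if a `message` response is new and from the peer.
    `msg` carries `side` and `phase` keys; `seen` is a set kept by
    the client across deliveries (hypothetical helper)."""
    if msg["side"] == my_side:
        return False  # echo of a message we sent ourselves
    key = (msg["side"], msg["phase"])
    if key in seen:
        return False  # duplicate delivery from the server
    seen.add(key)
    return True
```

A real client also uses the echoes positively (e.g. to confirm its own messages were stored), but the side/phase comparison above is the core of telling the two streams apart.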
+ +The `open` command both marks the mailbox as being opened by the bound side, +and adds the WebSocket as subscribed to that mailbox, so new messages +are delivered immediately to the connected client. There is no explicit ack +to the `open` command, but since all clients add a message to the mailbox as +soon as they connect, there will always be a `message` response shortly after +the `open` goes through. The `close` command provokes a `closed` response. + +The `close` command accepts an optional "mood" string: this allows clients to +tell the server (in general terms) about their experiences with the wormhole +interaction. The server records the mood in its "usage" record, so the server +operator can get a sense of how many connections are succeeding and failing. +The moods currently recognized by the Rendezvous Server are: + +* `happy` (default): the PAKE key-establishment worked, and the client saw at + least one valid encrypted message from its peer +* `lonely`: the client gave up without hearing anything from its peer +* `scary`: the client saw an invalid encrypted message from its peer, + indicating that either the wormhole code was typed in wrong, or an attacker + tried (and failed) to guess the code +* `errory`: the client encountered some other error: protocol problem or + internal error + +The server will also record `pruney` if it deleted the mailbox due to +inactivity, or `crowded` if more than two sides tried to access the mailbox. + +When clients use the `add` command to add a client-to-client message, they +will put the body (a bytestring) into the command as a hex-encoded string in +the `body` key. They will also put the message's "phase", as a string, into +the `phase` key. See client-protocol.md for details about how different +phases are used. + +When a client sends `open`, it will get back a `message` response for every +message in the mailbox. It will also get a real-time `message` for every +`add` performed by clients later.
These `message` responses include "side" +and "phase" from the sending client, and "body" (as a hex string, encoding +the binary message body). The decoded "body" will either be a random-looking +cryptographic value (for the PAKE message), or a random-looking encrypted +blob (for the VERSION message, as well as all application-provided payloads). +The `message` response will also include `id`, copied from the `id` of the +`add` message (and used only by the timing-diagram tool). + +The Rendezvous Server does not de-duplicate messages, nor does it retain +ordering: clients must do both if they need to. + +## All Message Types + +This lists all message types, along with the type-specific keys for each (if +any), and which ones provoke direct responses: + +* S->C welcome {welcome:} +* (C->S) bind {appid:, side:} +* (C->S) list {} -> nameplates +* S->C nameplates {nameplates: [{id: str},..]} +* (C->S) allocate {} -> allocated +* S->C allocated {nameplate:} +* (C->S) claim {nameplate:} -> claimed +* S->C claimed {mailbox:} +* (C->S) release {nameplate:?} -> released +* S->C released +* (C->S) open {mailbox:} +* (C->S) add {phase: str, body: hex} -> message (to all connected clients) +* S->C message {side:, phase:, body:, id:} +* (C->S) close {mailbox:?, mood:?} -> closed +* S->C closed +* S->C ack +* (C->S) ping {ping: int} -> pong +* S->C pong {pong: int} +* S->C error {error: str, orig:} + +## Persistence + +The server stores all messages in a database, so it should not lose any +information when it is restarted. The server will not send a direct +response until any side-effects (such as the message being added to the +mailbox) have been safely committed to the database. + +The client library knows how to resume the protocol after a reconnection +event, assuming the client process itself continues to run. + +Clients which terminate entirely between messages (e.g.
a secure chat +application, which requires multiple wormhole messages to exchange +address-book entries, and which must function even if the two apps are never +both running at the same time) can use "Journal Mode" to ensure forward +progress is made: see "journal.md" for details. diff --git a/docs/state-machines/Makefile b/docs/state-machines/Makefile new file mode 100644 index 0000000..b64cf90 --- /dev/null +++ b/docs/state-machines/Makefile @@ -0,0 +1,9 @@ + +default: images + +images: boss.png code.png key.png machines.png mailbox.png nameplate.png lister.png order.png receive.png send.png terminator.png + +.PHONY: default images + +%.png: %.dot + dot -T png $< >$@ diff --git a/docs/state-machines/_connection.dot b/docs/state-machines/_connection.dot new file mode 100644 index 0000000..3101f18 --- /dev/null +++ b/docs/state-machines/_connection.dot @@ -0,0 +1,76 @@ +digraph { + /* note: this is nominally what we want from the machine that + establishes the WebSocket connection (and re-establishes it when it + is lost). We aren't using this yet; for now we're relying upon + twisted.application.internet.ClientService, which does reconnection + and random exponential backoff. + + The one thing it doesn't do is fail entirely when the first + connection attempt fails, which I think would be good for usability. + If the first attempt fails, it's probably because you don't have a + network connection, or the hostname is wrong, or the service has + been retired entirely. And retrying silently forever is not being + honest with the user. + + So I'm keeping this diagram around, as a reminder of how we'd like + to modify ClientService. 
*/ + + + /* ConnectionMachine */ + C_start [label="Connection\nMachine" style="dotted"] + C_start -> C_Pc1 [label="CM_start()" color="orange" fontcolor="orange"] + C_Pc1 [shape="box" label="ep.connect()" color="orange"] + C_Pc1 -> C_Sc1 [color="orange"] + C_Sc1 [label="connecting\n(1st time)" color="orange"] + C_Sc1 -> C_P_reset [label="d.callback" color="orange" fontcolor="orange"] + C_P_reset [shape="box" label="reset\ntimer" color="orange"] + C_P_reset -> C_S_negotiating [color="orange"] + C_Sc1 -> C_P_failed [label="d.errback" color="red"] + C_Sc1 -> C_P_failed [label="p.onClose" color="red"] + C_Sc1 -> C_P_cancel [label="C_stop()"] + C_P_cancel [shape="box" label="d.cancel()"] + C_P_cancel -> C_S_cancelling + C_S_cancelling [label="cancelling"] + C_S_cancelling -> C_P_stopped [label="d.errback"] + + C_S_negotiating [label="negotiating" color="orange"] + C_S_negotiating -> C_P_failed [label="p.onClose"] + C_S_negotiating -> C_P_connected [label="p.onOpen" color="orange" fontcolor="orange"] + C_S_negotiating -> C_P_drop2 [label="C_stop()"] + C_P_drop2 [shape="box" label="p.dropConnection()"] + C_P_drop2 -> C_S_disconnecting + C_P_connected [shape="box" label="tx bind\nM_connected()" color="orange"] + C_P_connected -> C_S_open [color="orange"] + + C_S_open [label="open" color="green"] + C_S_open -> C_P_lost [label="p.onClose" color="blue" fontcolor="blue"] + C_S_open -> C_P_drop [label="C_stop()" color="orange" fontcolor="orange"] + C_P_drop [shape="box" label="p.dropConnection()\nM_lost()" color="orange"] + C_P_drop -> C_S_disconnecting [color="orange"] + C_S_disconnecting [label="disconnecting" color="orange"] + C_S_disconnecting -> C_P_stopped [label="p.onClose" color="orange" fontcolor="orange"] + + C_P_lost [shape="box" label="M_lost()" color="blue"] + C_P_lost -> C_P_wait [color="blue"] + C_P_wait [shape="box" label="start timer" color="blue"] + C_P_wait -> C_S_waiting [color="blue"] + C_S_waiting [label="waiting" color="blue"] + C_S_waiting -> C_Pc2 
[label="expire" color="blue" fontcolor="blue"] + C_S_waiting -> C_P_stop_timer [label="C_stop()"] + C_P_stop_timer [shape="box" label="timer.cancel()"] + C_P_stop_timer -> C_P_stopped + C_Pc2 [shape="box" label="ep.connect()" color="blue"] + C_Pc2 -> C_Sc2 [color="blue"] + C_Sc2 [label="reconnecting" color="blue"] + C_Sc2 -> C_P_reset [label="d.callback" color="blue" fontcolor="blue"] + C_Sc2 -> C_P_wait [label="d.errback"] + C_Sc2 -> C_P_cancel [label="C_stop()"] + + C_P_stopped [shape="box" label="MC_stopped()" color="orange"] + C_P_stopped -> C_S_stopped [color="orange"] + C_S_stopped [label="stopped" color="orange"] + + C_P_failed [shape="box" label="notify_fail" color="red"] + C_P_failed -> C_S_failed + C_S_failed [label="failed" color="red"] +} diff --git a/docs/state-machines/allocator.dot b/docs/state-machines/allocator.dot new file mode 100644 index 0000000..2e2e280 --- /dev/null +++ b/docs/state-machines/allocator.dot @@ -0,0 +1,29 @@ +digraph { + + start [label="A:\nNameplate\nAllocation" style="dotted"] + {rank=same; start S0A S0B} + start -> S0A [style="invis"] + S0A [label="S0A:\nidle\ndisconnected" color="orange"] + S0A -> S0B [label="connected"] + S0B -> S0A [label="lost"] + S0B [label="S0B:\nidle\nconnected"] + S0A -> S1A [label="allocate(length, wordlist)" color="orange"] + S0B -> P_allocate [label="allocate(length, wordlist)"] + P_allocate [shape="box" label="RC.tx_allocate" color="orange"] + P_allocate -> S1B [color="orange"] + {rank=same; S1A P_allocate S1B} + S0B -> S1B [style="invis"] + S1B [label="S1B:\nallocating\nconnected" color="orange"] + S1B -> foo [label="lost"] + foo [style="dotted" label=""] + foo -> S1A + S1A [label="S1A:\nallocating\ndisconnected" color="orange"] + S1A -> P_allocate [label="connected" color="orange"] + + S1B -> P_allocated [label="rx_allocated" color="orange"] + P_allocated [shape="box" label="choose words\nC.allocated(nameplate,code)" color="orange"] + P_allocated -> S2 [color="orange"] + + S2 [label="S2:\ndone" 
color="orange"] + +} diff --git a/docs/state-machines/boss.dot b/docs/state-machines/boss.dot new file mode 100644 index 0000000..8e32899 --- /dev/null +++ b/docs/state-machines/boss.dot @@ -0,0 +1,80 @@ +digraph { + + /* could shave a RTT by committing to the nameplate early, before + finishing the rest of the code input. While the user is still + typing/completing the code, we claim the nameplate, open the mailbox, + and retrieve the peer's PAKE message. Then as soon as the user + finishes entering the code, we build our own PAKE message, send PAKE, + compute the key, send VERSION. Starting from the Return, this saves + two round trips. OTOH it adds consequences to hitting Tab. */ + + start [label="Boss\n(manager)" style="dotted"] + + {rank=same; P0_code S0} + P0_code [shape="box" style="dashed" + label="C.input_code\n or C.allocate_code\n or C.set_code"] + P0_code -> S0 + S0 [label="S0: empty"] + S0 -> P0_build [label="got_code"] + + S0 -> P_close_error [label="rx_error"] + P_close_error [shape="box" label="T.close(errory)"] + P_close_error -> S_closing + S0 -> P_close_lonely [label="close"] + + S0 -> P_close_unwelcome [label="rx_unwelcome"] + P_close_unwelcome [shape="box" label="T.close(unwelcome)"] + P_close_unwelcome -> S_closing + + P0_build [shape="box" label="W.got_code"] + P0_build -> S1 + S1 [label="S1: lonely" color="orange"] + + S1 -> S2 [label="happy"] + + S1 -> P_close_error [label="rx_error"] + S1 -> P_close_scary [label="scared" color="red"] + S1 -> P_close_unwelcome [label="rx_unwelcome"] + S1 -> P_close_lonely [label="close"] + P_close_lonely [shape="box" label="T.close(lonely)"] + P_close_lonely -> S_closing + + P_close_scary [shape="box" label="T.close(scary)" color="red"] + P_close_scary -> S_closing [color="red"] + + S2 [label="S2: happy" color="green"] + S2 -> P2_close [label="close"] + P2_close [shape="box" label="T.close(happy)"] + P2_close -> S_closing + + S2 -> P2_got_phase [label="got_phase"] + P2_got_phase [shape="box" 
label="W.received"] + P2_got_phase -> S2 + + S2 -> P2_got_version [label="got_version"] + P2_got_version [shape="box" label="W.got_version"] + P2_got_version -> S2 + + S2 -> P_close_error [label="rx_error"] + S2 -> P_close_scary [label="scared" color="red"] + S2 -> P_close_unwelcome [label="rx_unwelcome"] + + S_closing [label="closing"] + S_closing -> P_closed [label="closed\nerror"] + S_closing -> S_closing [label="got_version\ngot_phase\nhappy\nscared\nclose"] + + P_closed [shape="box" label="W.closed(reason)"] + P_closed -> S_closed + S_closed [label="closed"] + + S0 -> P_closed [label="error"] + S1 -> P_closed [label="error"] + S2 -> P_closed [label="error"] + + {rank=same; Other S_closed} + Other [shape="box" style="dashed" + label="rx_welcome -> process (maybe rx_unwelcome)\nsend -> S.send\ngot_message -> got_version or got_phase\ngot_key -> W.got_key\ngot_verifier -> W.got_verifier\nallocate_code -> C.allocate_code\ninput_code -> C.input_code\nset_code -> C.set_code" + ] + + +} diff --git a/docs/state-machines/code.dot b/docs/state-machines/code.dot new file mode 100644 index 0000000..078950c --- /dev/null +++ b/docs/state-machines/code.dot @@ -0,0 +1,34 @@ +digraph { + + start [label="C:\nCode\n(management)" style="dotted"] + {rank=same; start S0} + start -> S0 [style="invis"] + S0 [label="S0:\nidle"] + S0 -> P0_got_code [label="set_code\n(code)"] + P0_got_code [shape="box" label="N.set_nameplate"] + P0_got_code -> P_done + P_done [shape="box" label="K.got_code\nB.got_code"] + P_done -> S4 + S4 [label="S4: known" color="green"] + + {rank=same; S1_inputting_nameplate S3_allocating} + {rank=same; P0_got_code P1_set_nameplate P3_got_nameplate} + S0 -> P_input [label="input_code"] + P_input [shape="box" label="I.start\n(helper)"] + P_input -> S1_inputting_nameplate + S1_inputting_nameplate [label="S1:\ninputting\nnameplate"] + S1_inputting_nameplate -> P1_set_nameplate [label="got_nameplate\n(nameplate)"] + P1_set_nameplate [shape="box" label="N.set_nameplate"] 
+ P1_set_nameplate -> S2_inputting_words + S2_inputting_words [label="S2:\ninputting\nwords"] + S2_inputting_words -> P_done [label="finished_input\n(code)"] + + S0 -> P_allocate [label="allocate_code\n(length,\nwordlist)"] + P_allocate [shape="box" label="A.allocate\n(length, wordlist)"] + P_allocate -> S3_allocating + S3_allocating [label="S3:\nallocating"] + S3_allocating -> P3_got_nameplate [label="allocated\n(nameplate,\ncode)"] + P3_got_nameplate [shape="box" label="N.set_nameplate"] + P3_got_nameplate -> P_done + +} diff --git a/docs/state-machines/input.dot b/docs/state-machines/input.dot new file mode 100644 index 0000000..580d2b9 --- /dev/null +++ b/docs/state-machines/input.dot @@ -0,0 +1,43 @@ +digraph { + + start [label="I:\nCode\nInput" style="dotted"] + {rank=same; start S0} + start -> S0 [style="invis"] + S0 [label="S0:\nidle"] + + S0 -> P0_list_nameplates [label="start"] + P0_list_nameplates [shape="box" label="L.refresh"] + P0_list_nameplates -> S1 + S1 [label="S1: typing\nnameplate" color="orange"] + + {rank=same; foo P0_list_nameplates} + S1 -> foo [label="refresh_nameplates" color="orange" fontcolor="orange"] + foo [style="dashed" label=""] + foo -> P0_list_nameplates + + S1 -> P1_record [label="got_nameplates"] + P1_record [shape="box" label="record\nnameplates"] + P1_record -> S1 + + S1 -> P1_claim [label="choose_nameplate" color="orange" fontcolor="orange"] + P1_claim [shape="box" label="stash nameplate\nC.got_nameplate"] + P1_claim -> S2 + S2 [label="S2: typing\ncode\n(no wordlist)"] + S2 -> S2 [label="got_nameplates"] + S2 -> P2_stash_wordlist [label="got_wordlist"] + P2_stash_wordlist [shape="box" label="stash wordlist"] + P2_stash_wordlist -> S3 + S2 -> P_done [label="choose_words" color="orange" fontcolor="orange"] + S3 [label="S3: typing\ncode\n(yes wordlist)"] + S3 -> S3 [label="got_nameplates"] + S3 -> P_done [label="choose_words" color="orange" fontcolor="orange"] + P_done [shape="box" label="build code\nC.finished_input(code)"] + 
P_done -> S4 + S4 [label="S4: done" color="green"] + S4 -> S4 [label="got_nameplates\ngot_wordlist"] + + other [shape="box" style="dotted" color="orange" fontcolor="orange" + label="h.refresh_nameplates()\nh.get_nameplate_completions(prefix)\nh.choose_nameplate(nameplate)\nh.get_word_completions(prefix)\nh.choose_words(words)" + ] + {rank=same; S4 other} +} diff --git a/docs/state-machines/key.dot b/docs/state-machines/key.dot new file mode 100644 index 0000000..feda71b --- /dev/null +++ b/docs/state-machines/key.dot @@ -0,0 +1,63 @@ +digraph { + + /* could shave a RTT by committing to the nameplate early, before + finishing the rest of the code input. While the user is still + typing/completing the code, we claim the nameplate, open the mailbox, + and retrieve the peer's PAKE message. Then as soon as the user + finishes entering the code, we build our own PAKE message, send PAKE, + compute the key, send VERSION. Starting from the Return, this saves + two round trips. OTOH it adds consequences to hitting Tab. 
*/ + + start [label="Key\nMachine" style="dotted"] + + /* two connected state machines: the first just puts the messages in + the right order, the second handles PAKE */ + + {rank=same; SO_00 PO_got_code SO_10} + {rank=same; SO_01 PO_got_both SO_11} + SO_00 [label="S00"] + SO_01 [label="S01: pake"] + SO_10 [label="S10: code"] + SO_11 [label="S11: both"] + SO_00 -> SO_01 [label="got_pake\n(early)"] + SO_00 -> PO_got_code [label="got_code"] + PO_got_code [shape="box" label="K1.got_code"] + PO_got_code -> SO_10 + SO_01 -> PO_got_both [label="got_code"] + PO_got_both [shape="box" label="K1.got_code\nK1.got_pake"] + PO_got_both -> SO_11 + SO_10 -> PO_got_pake [label="got_pake"] + PO_got_pake [shape="box" label="K1.got_pake"] + PO_got_pake -> SO_11 + + S0 [label="S0: know\nnothing"] + S0 -> P0_build [label="got_code"] + + P0_build [shape="box" label="build_pake\nM.add_message(pake)"] + P0_build -> S1 + S1 [label="S1: know\ncode"] + + /* the Mailbox will deliver each message exactly once, but doesn't + guarantee ordering: if Alice starts the process, then disconnects, + then Bob starts (reading PAKE, sending both his PAKE and his VERSION + phase), then Alice will see both PAKE and VERSION on her next + connect, and might get the VERSION first. + + The Wormhole will queue inbound messages that it isn't ready for. 
The + wormhole shim that lets applications do w.get(phase=) must do + something similar, queueing inbound messages until it sees one for + the phase it currently cares about.*/ + + S1 -> P_mood_scary [label="got_pake\npake bad"] + P_mood_scary [shape="box" color="red" label="W.scared"] + P_mood_scary -> S5 [color="red"] + S5 [label="S5:\nscared" color="red"] + S1 -> P1_compute [label="got_pake\npake good"] + #S1 -> P_mood_lonely [label="close"] + + P1_compute [label="compute_key\nM.add_message(version)\nB.got_key\nR.got_key" shape="box"] + P1_compute -> S4 + + S4 [label="S4: know_key" color="green"] + +} diff --git a/docs/state-machines/lister.dot b/docs/state-machines/lister.dot new file mode 100644 index 0000000..03ddd32 --- /dev/null +++ b/docs/state-machines/lister.dot @@ -0,0 +1,39 @@ +digraph { + {rank=same; title S0A S0B} + title [label="(Nameplate)\nLister" style="dotted"] + + S0A [label="S0A:\nnot wanting\nunconnected"] + S0B [label="S0B:\nnot wanting\nconnected" color="orange"] + + S0A -> S0B [label="connected"] + S0B -> S0A [label="lost"] + + S0A -> S1A [label="refresh"] + S0B -> P_tx [label="refresh" color="orange" fontcolor="orange"] + + S0A -> P_tx [style="invis"] + + {rank=same; S1A P_tx S1B P_notify} + + S1A [label="S1A:\nwant list\nunconnected"] + S1B [label="S1B:\nwant list\nconnected" color="orange"] + + S1A -> P_tx [label="connected"] + P_tx [shape="box" label="RC.tx_list()" color="orange"] + P_tx -> S1B [color="orange"] + S1B -> S1A [label="lost"] + + S1A -> foo [label="refresh"] + foo [label="" style="dashed"] + foo -> S1A + + S1B -> foo2 [label="refresh"] + foo2 [label="" style="dashed"] + foo2 -> P_tx + + S0B -> P_notify [label="rx_nameplates"] + S1B -> P_notify [label="rx_nameplates" color="orange" fontcolor="orange"] + P_notify [shape="box" label="I.got_nameplates()"] + P_notify -> S0B + +} diff --git a/docs/state-machines/machines.dot b/docs/state-machines/machines.dot new file mode 100644 index 0000000..eccc96d --- /dev/null +++ 
b/docs/state-machines/machines.dot @@ -0,0 +1,115 @@ +digraph { + Wormhole [shape="oval" color="blue" fontcolor="blue"] + Boss [shape="box" label="Boss\n(manager)" + color="blue" fontcolor="blue"] + Nameplate [label="Nameplate\n(claimer)" + shape="box" color="blue" fontcolor="blue"] + Mailbox [label="Mailbox\n(opener)" + shape="box" color="blue" fontcolor="blue"] + Connection [label="Rendezvous\nConnector" + shape="oval" color="blue" fontcolor="blue"] + #websocket [color="blue" fontcolor="blue"] + Order [shape="box" label="Ordering" color="blue" fontcolor="blue"] + Key [shape="box" label="Key" color="blue" fontcolor="blue"] + Send [shape="box" label="Send" color="blue" fontcolor="blue"] + Receive [shape="box" label="Receive" color="blue" fontcolor="blue"] + Code [shape="box" label="Code" color="blue" fontcolor="blue"] + Lister [shape="box" label="(nameplate)\nLister" + color="blue" fontcolor="blue"] + Allocator [shape="box" label="(nameplate)\nAllocator" + color="blue" fontcolor="blue"] + Input [shape="box" label="(interactive\ncode)\nInput" + color="blue" fontcolor="blue"] + Terminator [shape="box" color="blue" fontcolor="blue"] + InputHelperAPI [shape="oval" label="input\nhelper\nAPI" + color="blue" fontcolor="blue"] + + #Connection -> websocket [color="blue"] + #Connection -> Order [color="blue"] + + Wormhole -> Boss [style="dashed" + label="allocate_code\ninput_code\nset_code\nsend\nclose\n(once)" + color="red" fontcolor="red"] + #Wormhole -> Boss [color="blue"] + Boss -> Wormhole [style="dashed" label="got_code\ngot_key\ngot_verifier\ngot_version\nreceived (seq)\nclosed\n(once)"] + + #Boss -> Connection [color="blue"] + Boss -> Connection [style="dashed" label="start" + color="red" fontcolor="red"] + Connection -> Boss [style="dashed" label="rx_welcome\nrx_error\nerror"] + + Boss -> Send [style="dashed" color="red" fontcolor="red" label="send"] + + #Boss -> Mailbox [color="blue"] + Mailbox -> Order [style="dashed" label="got_message (once)"] + Key -> Boss 
[style="dashed" label="got_key\nscared"] + Order -> Key [style="dashed" label="got_pake"] + Order -> Receive [style="dashed" label="got_message"] + #Boss -> Key [color="blue"] + Key -> Mailbox [style="dashed" + label="add_message (pake)\nadd_message (version)"] + Receive -> Send [style="dashed" label="got_verified_key"] + Send -> Mailbox [style="dashed" color="red" fontcolor="red" + label="add_message (phase)"] + + Key -> Receive [style="dashed" label="got_key"] + Receive -> Boss [style="dashed" + label="happy\nscared\ngot_verifier\ngot_message"] + Nameplate -> Connection [style="dashed" + label="tx_claim\ntx_release"] + Connection -> Nameplate [style="dashed" + label="connected\nlost\nrx_claimed\nrx_released"] + Mailbox -> Nameplate [style="dashed" label="release"] + Nameplate -> Mailbox [style="dashed" label="got_mailbox"] + Nameplate -> Input [style="dashed" label="got_wordlist"] + + Mailbox -> Connection [style="dashed" color="red" fontcolor="red" + label="tx_open\ntx_add\ntx_close" + ] + Connection -> Mailbox [style="dashed" + label="connected\nlost\nrx_message\nrx_closed\nstopped"] + + Connection -> Lister [style="dashed" + label="connected\nlost\nrx_nameplates" + ] + Lister -> Connection [style="dashed" + label="tx_list" + ] + + #Boss -> Code [color="blue"] + Connection -> Allocator [style="dashed" + label="connected\nlost\nrx_allocated"] + Allocator -> Connection [style="dashed" color="red" fontcolor="red" + label="tx_allocate" + ] + Lister -> Input [style="dashed" + label="got_nameplates" + ] + #Code -> Lister [color="blue"] + Input -> Lister [style="dashed" color="red" fontcolor="red" + label="refresh" + ] + Boss -> Code [style="dashed" color="red" fontcolor="red" + label="allocate_code\ninput_code\nset_code"] + Code -> Boss [style="dashed" label="got_code"] + Code -> Key [style="dashed" label="got_code"] + Code -> Nameplate [style="dashed" label="set_nameplate"] + + Code -> Input [style="dashed" color="red" fontcolor="red" label="start"] + Input -> Code 
[style="dashed" label="got_nameplate\nfinished_input"] + InputHelperAPI -> Input [label="refresh_nameplates\nget_nameplate_completions\nchoose_nameplate\nget_word_completions\nchoose_words" color="orange" fontcolor="orange"] + + Code -> Allocator [style="dashed" color="red" fontcolor="red" + label="allocate"] + Allocator -> Code [style="dashed" label="allocated"] + + Nameplate -> Terminator [style="dashed" label="nameplate_done"] + Mailbox -> Terminator [style="dashed" label="mailbox_done"] + Terminator -> Nameplate [style="dashed" label="close"] + Terminator -> Mailbox [style="dashed" label="close"] + Terminator -> Connection [style="dashed" label="stop"] + Connection -> Terminator [style="dashed" label="stopped"] + Terminator -> Boss [style="dashed" label="closed\n(once)"] + Boss -> Terminator [style="dashed" color="red" fontcolor="red" + label="close"] +} diff --git a/docs/state-machines/mailbox.dot b/docs/state-machines/mailbox.dot new file mode 100644 index 0000000..9bcd964 --- /dev/null +++ b/docs/state-machines/mailbox.dot @@ -0,0 +1,98 @@ +digraph { + /* new idea */ + + title [label="Mailbox\nMachine" style="dotted"] + + {rank=same; S0A S0B} + S0A [label="S0A:\nunknown"] + S0A -> S0B [label="connected"] + S0B [label="S0B:\nunknown\n(bound)" color="orange"] + + S0B -> S0A [label="lost"] + + S0A -> P0A_queue [label="add_message" style="dotted"] + P0A_queue [shape="box" label="queue" style="dotted"] + P0A_queue -> S0A [style="dotted"] + S0B -> P0B_queue [label="add_message" style="dotted"] + P0B_queue [shape="box" label="queue" style="dotted"] + P0B_queue -> S0B [style="dotted"] + + subgraph {rank=same; S1A P_open} + S0A -> S1A [label="got_mailbox"] + S1A [label="S1A:\nknown"] + S1A -> P_open [label="connected"] + S1A -> P1A_queue [label="add_message" style="dotted"] + P1A_queue [shape="box" label="queue" style="dotted"] + P1A_queue -> S1A [style="dotted"] + S1A -> S2A [style="invis"] + P_open -> P2_connected [style="invis"] + + S0A -> S2A [style="invis"] + 
S0B -> P_open [label="got_mailbox" color="orange" fontcolor="orange"] + P_open [shape="box" + label="store mailbox\nRC.tx_open\nRC.tx_add(queued)" color="orange"] + P_open -> S2B [color="orange"] + + subgraph {rank=same; S2A S2B P2_connected} + S2A [label="S2A:\nknown\nmaybe opened"] + S2B [label="S2B:\nopened\n(bound)" color="green"] + S2A -> P2_connected [label="connected"] + S2B -> S2A [label="lost"] + + P2_connected [shape="box" label="RC.tx_open\nRC.tx_add(queued)"] + P2_connected -> S2B + + S2A -> P2_queue [label="add_message" style="dotted"] + P2_queue [shape="box" label="queue" style="dotted"] + P2_queue -> S2A [style="dotted"] + + S2B -> P2_send [label="add_message"] + P2_send [shape="box" label="queue\nRC.tx_add(msg)"] + P2_send -> S2B + + {rank=same; P2_send P2_close P2_process_theirs} + P2_process_theirs -> P2_close [style="invis"] + S2B -> P2_process_ours [label="rx_message\n(ours)"] + P2_process_ours [shape="box" label="dequeue"] + P2_process_ours -> S2B + S2B -> P2_process_theirs [label="rx_message\n(theirs)" + color="orange" fontcolor="orange"] + P2_process_theirs [shape="box" color="orange" + label="N.release\nO.got_message if new\nrecord" + ] + P2_process_theirs -> S2B [color="orange"] + + S2B -> P2_close [label="close" color="red"] + P2_close [shape="box" label="RC.tx_close" color="red"] + P2_close -> S3B [color="red"] + + subgraph {rank=same; S3A P3_connected S3B} + S3A [label="S3A:\nclosing"] + S3A -> P3_connected [label="connected"] + P3_connected [shape="box" label="RC.tx_close"] + P3_connected -> S3B + #S3A -> S3A [label="add_message"] # implicit + S3B [label="S3B:\nclosing\n(bound)" color="red"] + S3B -> S3B [label="add_message\nrx_message\nclose"] + S3B -> S3A [label="lost"] + + subgraph {rank=same; P3A_done P3B_done} + P3A_done [shape="box" label="T.mailbox_done" color="red"] + P3A_done -> S4A + S3B -> P3B_done [label="rx_closed" color="red"] + P3B_done [shape="box" label="T.mailbox_done" color="red"] + P3B_done -> S4B + + subgraph 
{rank=same; S4A S4B} + S4A [label="S4A:\nclosed"] + S4B [label="S4B:\nclosed"] + S4A -> S4B [label="connected"] + S4B -> S4A [label="lost"] + S4B -> S4B [label="add_message\nrx_message\nclose"] # is "close" needed? + + S0A -> P3A_done [label="close" color="red"] + S0B -> P3B_done [label="close" color="red"] + S1A -> P3A_done [label="close" color="red"] + S2A -> S3A [label="close" color="red"] + +} diff --git a/docs/state-machines/nameplate.dot b/docs/state-machines/nameplate.dot new file mode 100644 index 0000000..8ddeabd --- /dev/null +++ b/docs/state-machines/nameplate.dot @@ -0,0 +1,101 @@ +digraph { + /* new idea */ + + title [label="Nameplate\nMachine" style="dotted"] + title -> S0A [style="invis"] + + {rank=same; S0A S0B} + S0A [label="S0A:\nknow nothing"] + S0B [label="S0B:\nknow nothing\n(bound)" color="orange"] + S0A -> S0B [label="connected"] + S0B -> S0A [label="lost"] + + S0A -> S1A [label="set_nameplate"] + S0B -> P2_connected [label="set_nameplate" color="orange" fontcolor="orange"] + + S1A [label="S1A:\nnever claimed"] + S1A -> P2_connected [label="connected"] + + S1A -> S2A [style="invis"] + S1B [style="invis"] + S0B -> S1B [style="invis"] + S1B -> S2B [style="invis"] + {rank=same; S1A S1B} + S1A -> S1B [style="invis"] + + {rank=same; S2A P2_connected S2B} + S2A [label="S2A:\nmaybe claimed"] + S2A -> P2_connected [label="connected"] + P2_connected [shape="box" + label="RC.tx_claim" color="orange"] + P2_connected -> S2B [color="orange"] + S2B [label="S2B:\nmaybe claimed\n(bound)" color="orange"] + + #S2B -> S2A [label="lost"] # causes bad layout + S2B -> foo2 [label="lost"] + foo2 [label="" style="dashed"] + foo2 -> S2A + + S2A -> S3A [label="(none)" style="invis"] + S2B -> P_open [label="rx_claimed" color="orange" fontcolor="orange"] + P_open [shape="box" label="I.got_wordlist\nM.got_mailbox" color="orange"] + P_open -> S3B [color="orange"] + + subgraph {rank=same; S3A S3B} + S3A [label="S3A:\nclaimed"] + S3B [label="S3B:\nclaimed\n(bound)" 
color="orange"] + S3A -> S3B [label="connected"] + S3B -> foo3 [label="lost"] + foo3 [label="" style="dashed"] + foo3 -> S3A + + #S3B -> S3B [label="rx_claimed"] # shouldn't happen + + S3B -> P3_release [label="release" color="orange" fontcolor="orange"] + P3_release [shape="box" color="orange" label="RC.tx_release"] + P3_release -> S4B [color="orange"] + + subgraph {rank=same; S4A P4_connected S4B} + S4A [label="S4A:\nmaybe released\n"] + + S4B [label="S4B:\nmaybe released\n(bound)" color="orange"] + S4A -> P4_connected [label="connected"] + P4_connected [shape="box" label="RC.tx_release"] + S4B -> S4B [label="release"] + + P4_connected -> S4B + S4B -> foo4 [label="lost"] + foo4 [label="" style="dashed"] + foo4 -> S4A + + S4A -> S5B [style="invis"] + P4_connected -> S5B [style="invis"] + + subgraph {rank=same; P5A_done P5B_done} + S4B -> P5B_done [label="rx released" color="orange" fontcolor="orange"] + P5B_done [shape="box" label="T.nameplate_done" color="orange"] + P5B_done -> S5B [color="orange"] + + subgraph {rank=same; S5A S5B} + S5A [label="S5A:\nreleased"] + S5A -> S5B [label="connected"] + S5B -> S5A [label="lost"] + S5B [label="S5B:\nreleased" color="green"] + + S5B -> S5B [label="release\nclose"] + + P5A_done [shape="box" label="T.nameplate_done"] + P5A_done -> S5A + + S0A -> P5A_done [label="close" color="red"] + S1A -> P5A_done [label="close" color="red"] + S2A -> S4A [label="close" color="red"] + S3A -> S4A [label="close" color="red"] + S4A -> S4A [label="close" color="red"] + S0B -> P5B_done [label="close" color="red"] + S2B -> P3_release [label="close" color="red"] + S3B -> P3_release [label="close" color="red"] + S4B -> S4B [label="close" color="red"] + + +} diff --git a/docs/state-machines/order.dot b/docs/state-machines/order.dot new file mode 100644 index 0000000..202bc10 --- /dev/null +++ b/docs/state-machines/order.dot @@ -0,0 +1,35 @@ +digraph { + start [label="Order\nMachine" style="dotted"] + /* our goal: deliver PAKE before anything else 
*/ + + {rank=same; S0 P0_other} + {rank=same; S1 P1_other} + + S0 [label="S0: no pake" color="orange"] + S1 [label="S1: yes pake" color="green"] + S0 -> P0_pake [label="got_pake" + color="orange" fontcolor="orange"] + P0_pake [shape="box" color="orange" + label="K.got_pake\ndrain queue:\n[R.got_message]" + ] + P0_pake -> S1 [color="orange"] + S0 -> P0_other [label="got_version\ngot_phase" style="dotted"] + P0_other [shape="box" label="queue" style="dotted"] + P0_other -> S0 [style="dotted"] + + S1 -> P1_other [label="got_version\ngot_phase"] + P1_other [shape="box" label="R.got_message"] + P1_other -> S1 + + + /* the Mailbox will deliver each message exactly once, but doesn't + guarantee ordering: if Alice starts the process, then disconnects, + then Bob starts (reading PAKE, sending both his PAKE and his VERSION + phase), then Alice will see both PAKE and VERSION on her next + connect, and might get the VERSION first. + + The Wormhole will queue inbound messages that it isn't ready for. The + wormhole shim that lets applications do w.get(phase=) must do + something similar, queueing inbound messages until it sees one for + the phase it currently cares about.*/ +} diff --git a/docs/state-machines/receive.dot b/docs/state-machines/receive.dot new file mode 100644 index 0000000..ba757e1 --- /dev/null +++ b/docs/state-machines/receive.dot @@ -0,0 +1,39 @@ +digraph { + + /* could shave a RTT by committing to the nameplate early, before + finishing the rest of the code input. While the user is still + typing/completing the code, we claim the nameplate, open the mailbox, + and retrieve the peer's PAKE message. Then as soon as the user + finishes entering the code, we build our own PAKE message, send PAKE, + compute the key, send VERSION. Starting from the Return, this saves + two round trips. OTOH it adds consequences to hitting Tab. 
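The comment in order.dot above is the heart of the Order machine: the Mailbox delivers each message exactly once but not necessarily in order, so non-PAKE messages must be queued until the PAKE message has been processed. A stdlib-only sketch of that queue-and-drain behavior (illustrative, not part of the patch; the two callables stand in for `K.got_pake` and `R.got_message`):

```python
class OrderSketch(object):
    """Queue non-PAKE messages until the PAKE message has been delivered."""
    def __init__(self, deliver_pake, deliver_message):
        self._deliver_pake = deliver_pake        # stands in for K.got_pake
        self._deliver_message = deliver_message  # stands in for R.got_message
        self._have_pake = False
        self._queue = []

    def got_pake(self, body):
        # S0 -> S1: deliver the PAKE first, then drain in arrival order
        self._have_pake = True
        self._deliver_pake(body)
        queued, self._queue = self._queue, []
        for msg in queued:
            self._deliver_message(msg)

    def got_message(self, body):
        if self._have_pake:
            self._deliver_message(body)  # S1: pass through
        else:
            self._queue.append(body)     # S0: hold until PAKE arrives
```

If a VERSION phase arrives before the PAKE message, as can happen after the reconnect scenario the comment describes, it is simply parked until `got_pake` fires.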
*/ + + start [label="Receive\nMachine" style="dotted"] + + S0 [label="S0:\nunknown key" color="orange"] + S0 -> P0_got_key [label="got_key" color="orange"] + + P0_got_key [shape="box" label="record key" color="orange"] + P0_got_key -> S1 [color="orange"] + + S1 [label="S1:\nunverified key" color="orange"] + S1 -> P_mood_scary [label="got_message\n(bad)"] + S1 -> P1_accept_msg [label="got_message\n(good)" color="orange"] + P1_accept_msg [shape="box" label="S.got_verified_key\nB.happy\nB.got_verifier\nB.got_message" + color="orange"] + P1_accept_msg -> S2 [color="orange"] + + S2 [label="S2:\nverified key" color="green"] + + S2 -> P2_accept_msg [label="got_message\n(good)" color="orange"] + S2 -> P_mood_scary [label="got_message(bad)"] + + P2_accept_msg [label="B.got_message" shape="box" color="orange"] + P2_accept_msg -> S2 [color="orange"] + + P_mood_scary [shape="box" label="B.scared" color="red"] + P_mood_scary -> S3 [color="red"] + + S3 [label="S3:\nscared" color="red"] + S3 -> S3 [label="got_message"] +} diff --git a/docs/state-machines/send.dot b/docs/state-machines/send.dot new file mode 100644 index 0000000..91ed067 --- /dev/null +++ b/docs/state-machines/send.dot @@ -0,0 +1,19 @@ +digraph { + start [label="Send\nMachine" style="dotted"] + + {rank=same; S0 P0_queue} + {rank=same; S1 P1_send} + + S0 [label="S0: unknown\nkey"] + S0 -> P0_queue [label="send" style="dotted"] + P0_queue [shape="box" label="queue" style="dotted"] + P0_queue -> S0 [style="dotted"] + S0 -> P0_got_key [label="got_verified_key"] + + P0_got_key [shape="box" label="drain queue:\n[encrypt\n M.add_message]"] + P0_got_key -> S1 + S1 [label="S1: verified\nkey"] + S1 -> P1_send [label="send"] + P1_send [shape="box" label="encrypt\nM.add_message"] + P1_send -> S1 +} diff --git a/docs/state-machines/terminator.dot b/docs/state-machines/terminator.dot new file mode 100644 index 0000000..749eb3c --- /dev/null +++ b/docs/state-machines/terminator.dot @@ -0,0 +1,50 @@ +digraph { + /* M_close 
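The receive.dot machine above encodes a verify-on-first-message rule: the first message that decrypts correctly verifies the key (S1 to S2), and any message that fails to decrypt, before or after that, drives the machine into the terminal "scared" state. An illustrative stdlib sketch (the tuple-based `_decrypt` is a stand-in for the real decryption, which this diagram does not show):

```python
class ReceiveSketch(object):
    """First message that decrypts verifies the key; any message that
    fails to decrypt implies a man-in-the-middle, so we become scared."""
    def __init__(self, key):
        self._key = key
        self.verified = False
        self.scared = False
        self.messages = []

    def _decrypt(self, msg):
        # stand-in for real decryption: (key, body) tuples
        tag, body = msg
        return body if tag == self._key else None

    def got_message(self, msg):
        if self.scared:
            return  # S3: everything is ignored once scared
        body = self._decrypt(msg)
        if body is None:
            self.scared = True  # B.scared -> S3
            return
        if not self.verified:
            self.verified = True  # S1 -> S2: S.got_verified_key, B.happy
        self.messages.append(body)  # B.got_message
```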
pathways */ + title [label="Terminator\nMachine" style="dotted"] + + initial [style="invis"] + initial -> Snmo [style="dashed"] + + Snmo [label="Snmo:\nnameplate active\nmailbox active\nopen" color="orange"] + Sno [label="Sno:\nnameplate active\nmailbox done\nopen"] + Smo [label="Smo:\nnameplate done\nmailbox active\nopen" color="green"] + S0o [label="S0o:\nnameplate done\nmailbox done\nopen"] + + Snmo -> Sno [label="mailbox_done"] + Snmo -> Smo [label="nameplate_done" color="orange"] + Sno -> S0o [label="nameplate_done"] + Smo -> S0o [label="mailbox_done"] + + Snmo -> Snm [label="close"] + Sno -> Sn [label="close"] + Smo -> Sm [label="close" color="red"] + S0o -> P_stop [label="close"] + + Snm [label="Snm:\nnameplate active\nmailbox active\nclosing" + style="dashed"] + Sn [label="Sn:\nnameplate active\nmailbox done\nclosing" + style="dashed"] + Sm [label="Sm:\nnameplate done\nmailbox active\nclosing" + style="dashed" color="red"] + + Snm -> Sn [label="mailbox_done"] + Snm -> Sm [label="nameplate_done"] + Sn -> P_stop [label="nameplate_done"] + Sm -> P_stop [label="mailbox_done" color="red"] + + {rank=same; S_stopping Pss S_stopped} + P_stop [shape="box" label="RC.stop" color="red"] + P_stop -> S_stopping [color="red"] + + S_stopping [label="S_stopping" color="red"] + S_stopping -> Pss [label="stopped"] + Pss [shape="box" label="B.closed"] + Pss -> S_stopped + + S_stopped [label="S_stopped"] + + other [shape="box" style="dashed" + label="close -> N.close, M.close"] + + +} diff --git a/docs/w.dot b/docs/w.dot new file mode 100644 index 0000000..244ee2d --- /dev/null +++ b/docs/w.dot @@ -0,0 +1,86 @@ +digraph { + + /* + NM_start [label="Nameplate\nMachine" style="dotted"] + NM_start -> NM_S_unclaimed [style="invis"] + NM_S_unclaimed [label="no nameplate"] + NM_S_unclaimed -> NM_S_unclaimed [label="NM_release()"] + NM_P_set_nameplate [shape="box" label="post_claim()"] + NM_S_unclaimed -> NM_P_set_nameplate [label="NM_set_nameplate()"] + NM_S_claiming [label="claim 
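The Terminator diagram above reduces to a simple invariant: the connection is stopped only after all three of `nameplate_done`, `mailbox_done`, and the application's `close` have arrived, in any order. A minimal sketch of that rendezvous (hypothetical names; `stop` stands in for `RC.stop`):

```python
class TerminatorSketch(object):
    """Fire stop() once nameplate, mailbox, and the app have all finished."""
    def __init__(self, stop):
        self._stop = stop  # stands in for RC.stop
        self._pending = {"nameplate", "mailbox", "close"}

    def _maybe_stop(self, event):
        self._pending.discard(event)
        if not self._pending:
            self._stop()

    def nameplate_done(self):
        self._maybe_stop("nameplate")

    def mailbox_done(self):
        self._maybe_stop("mailbox")

    def close(self):
        self._maybe_stop("close")
```

The real machine also tracks whether the app has closed yet (the "open" vs "closing" rows of states), but the stop condition is the same three-way join.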
pending"] + NM_P_set_nameplate -> NM_S_claiming + NM_S_claiming -> NM_P_rx_claimed [label="rx claimed"] + NM_P_rx_claimed [label="MM_set_mailbox()" shape="box"] + NM_P_rx_claimed -> NM_S_claimed + NM_S_claimed [label="claimed"] + NM_S_claimed -> NM_P_release [label="NM_release()"] + NM_P_release [shape="box" label="post_release()"] + NM_P_release -> NM_S_releasing + NM_S_releasing [label="release pending"] + NM_S_releasing -> NM_S_releasing [label="NM_release()"] + NM_S_releasing -> NM_S_released [label="rx released"] + NM_S_released [label="released"] + NM_S_released -> NM_S_released [label="NM_release()"] + */ + + /* + MM_start [label="Mailbox\nMachine" style="dotted"] + MM_start -> MM_S_want_mailbox [style="invis"] + MM_S_want_mailbox [label="want mailbox"] + MM_S_want_mailbox -> MM_P_queue1 [label="MM_send()" style="dotted"] + MM_P_queue1 [shape="box" style="dotted" label="queue message"] + MM_P_queue1 -> MM_S_want_mailbox [style="dotted"] + MM_P_open_mailbox [shape="box" label="post_open()"] + MM_S_want_mailbox -> MM_P_open_mailbox [label="set_mailbox()"] + MM_P_send_queued [shape="box" label="post add() for\nqueued messages"] + MM_P_open_mailbox -> MM_P_send_queued + MM_P_send_queued -> MM_S_open + MM_S_open [label="open\n(unused)"] + MM_S_open -> MM_P_send1 [label="MM_send()"] + MM_P_send1 [shape="box" label="post add()\nfor message"] + MM_P_send1 -> MM_S_open + MM_S_open -> MM_P_release1 [label="MM_close()"] + MM_P_release1 [shape="box" label="NM_release()"] + MM_P_release1 -> MM_P_close + + MM_S_open -> MM_P_rx [label="rx message"] + MM_P_rx [shape="box" label="WM_rx_pake()\nor WM_rx_msg()"] + MM_P_rx -> MM_P_release2 + MM_P_release2 [shape="box" label="NM_release()"] + MM_P_release2 -> MM_S_used + MM_S_used [label="open\n(used)"] + MM_S_used -> MM_P_rx [label="rx message"] + MM_S_used -> MM_P_send2 [label="MM_send()"] + MM_P_send2 [shape="box" label="post add()\nfor message"] + MM_P_send2 -> MM_S_used + MM_S_used -> MM_P_close [label="MM_close()"] + 
MM_P_close [shape="box" label="post_close(mood)"]
+    MM_P_close -> MM_S_closing
+    MM_S_closing [label="waiting"]
+    MM_S_closing -> MM_S_closing [label="MM_close()"]
+    MM_S_closing -> MM_S_closed [label="rx closed"]
+    MM_S_closed [label="closed"]
+    MM_S_closed -> MM_S_closed [label="MM_close()"]
+    */
+
+    /* upgrading to new PAKE algorithm, the slower form (the faster form
+    puts the pake_abilities record in the nameplate_info message) */
+    /*
+    P2_start [label="(PAKE\nupgrade)\nstart"]
+    P2_start -> P2_P_send_abilities [label="set_code()"]
+    P2_P_send_abilities [shape="box" label="send pake_abilities"]
+    P2_P_send_abilities -> P2_wondering
+    P2_wondering [label="waiting\nwondering"]
+    P2_wondering -> P2_P_send_pakev1 [label="rx pake_v1"]
+    P2_P_send_pakev1 [shape="box" label="send pake_v1"]
+    P2_P_send_pakev1 -> P2_P_process_v1
+    P2_P_process_v1 [shape="box" label="process v1"]
+    P2_wondering -> P2_P_find_max [label="rx pake_abilities"]
+    P2_P_find_max [shape="box" label="find max"]
+    P2_P_find_max -> P2_P_send_pakev2
+    P2_P_send_pakev2 [shape="box" label="send pake_v2"]
+    P2_P_send_pakev2 -> P2_P_process_v2 [label="rx pake_v2"]
+    P2_P_process_v2 [shape="box" label="process v2"]
+    */
+}
diff --git a/misc/demo-journal.py b/misc/demo-journal.py
new file mode 100644
index 0000000..ff76636
--- /dev/null
+++ b/misc/demo-journal.py
@@ -0,0 +1,270 @@
+import os, sys, json, random, contextlib
+from twisted.internet import task, defer, endpoints
+from twisted.application import service, internet
+from twisted.web import server, static, resource
+from wormhole import journal, wormhole
+
+# considerations for state management:
+# * be somewhat principled about the data (e.g. have a schema)
+# * discourage accidental schema changes
+# * avoid surprise mutations by app code (don't hand out mutables)
+# * discourage app from keeping state itself: make state object easy enough
+#   to use for everything. App should only hold objects that are active
+#   (Services, subscribers, etc).
+#   App must wire up these objects each time.
+
+class State(object):
+    @classmethod
+    def create_empty(klass):
+        self = klass()
+        # to avoid being tripped up by state-mutation side-effect bugs, we
+        # hold the serialized state in RAM, and re-deserialize it each time
+        # someone asks for a piece of it.
+        empty = {"version": 1,
+                 "invitations": {}, # iid->invitation_state
+                 "contacts": [],
+                 }
+        self._bytes = json.dumps(empty).encode("utf-8")
+        return self
+
+    @classmethod
+    def from_filename(klass, fn):
+        self = klass()
+        with open(fn, "rb") as f:
+            bytes = f.read()
+        self._bytes = bytes
+        # version check
+        data = self._as_data()
+        assert data["version"] == 1
+        # schema check?
+        return self
+
+    def save_to_filename(self, fn):
+        tmpfn = fn+".tmp"
+        with open(tmpfn, "wb") as f:
+            f.write(self._bytes)
+        os.rename(tmpfn, fn)
+
+    def _as_data(self):
+        return json.loads(self._bytes.decode("utf-8"))
+
+    @contextlib.contextmanager
+    def _mutate(self):
+        data = self._as_data()
+        yield data # mutable
+        self._bytes = json.dumps(data).encode("utf-8")
+
+    def get_all_invitations(self):
+        return self._as_data()["invitations"]
+    def add_invitation(self, iid, invitation_state):
+        with self._mutate() as data:
+            data["invitations"][iid] = invitation_state
+    def update_invitation(self, iid, invitation_state):
+        with self._mutate() as data:
+            assert iid in data["invitations"]
+            data["invitations"][iid] = invitation_state
+    def remove_invitation(self, iid):
+        with self._mutate() as data:
+            del data["invitations"][iid]
+
+    def add_contact(self, contact):
+        with self._mutate() as data:
+            data["contacts"].append(contact)
+
+
+
+class Root(resource.Resource):
+    pass
+
+class Status(resource.Resource):
+    def __init__(self, c):
+        resource.Resource.__init__(self)
+        self._call = c
+    def render_GET(self, req):
+        data = self._call()
+        req.setHeader(b"content-type", "text/plain")
+        return data
+
+class Action(resource.Resource):
+    def __init__(self, c):
+        resource.Resource.__init__(self)
+        self._call = c
+
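The State class above deliberately keeps only serialized bytes in RAM and re-deserializes on every access, so callers can never hold a reference that aliases the stored data. A compressed stand-alone illustration of the same hold-bytes-and-reserialize trick (a hypothetical class written for this note, not part of the patch):

```python
import json, contextlib

class BytesBackedState(object):
    """Keep only serialized bytes; every reader gets a fresh copy."""
    def __init__(self):
        self._bytes = json.dumps({"contacts": []}).encode("utf-8")

    def _as_data(self):
        # a fresh deserialization on every read: no shared mutables escape
        return json.loads(self._bytes.decode("utf-8"))

    @contextlib.contextmanager
    def _mutate(self):
        data = self._as_data()
        yield data  # mutations inside the block are captured...
        self._bytes = json.dumps(data).encode("utf-8")  # ...here

    def get_contacts(self):
        return self._as_data()["contacts"]

    def add_contact(self, contact):
        with self._mutate() as data:
            data["contacts"].append(contact)
```

Mutating the list returned by `get_contacts()` has no effect on the stored state, which is exactly the side-effect bug the comment at the top of the file is guarding against.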
def render_POST(self, req): + req.setHeader(b"content-type", "text/plain") + try: + args = json.load(req.content) + except ValueError: + req.setResponseCode(500) + return b"bad JSON" + data = self._call(args) + return data + +class Agent(service.MultiService): + def __init__(self, basedir, reactor): + service.MultiService.__init__(self) + self._basedir = basedir + self._reactor = reactor + + root = Root() + site = server.Site(root) + ep = endpoints.serverFromString(reactor, "tcp:8220") + internet.StreamServerEndpointService(ep, site).setServiceParent(self) + + self._jm = journal.JournalManager(self._save_state) + + root.putChild(b"", static.Data("root", "text/plain")) + root.putChild(b"list-invitations", Status(self._list_invitations)) + root.putChild(b"invite", Action(self._invite)) # {petname:} + root.putChild(b"accept", Action(self._accept)) # {petname:, code:} + + self._state_fn = os.path.join(self._basedir, "state.json") + self._state = State.from_filename(self._state_fn) + + self._wormholes = {} + for iid, invitation_state in self._state.get_all_invitations().items(): + def _dispatch(event, *args, **kwargs): + self._dispatch_wormhole_event(iid, event, *args, **kwargs) + w = wormhole.journaled_from_data(invitation_state["wormhole"], + reactor=self._reactor, + journal=self._jm, + event_handler=self, + event_handler_args=(iid,)) + self._wormholes[iid] = w + w.setServiceParent(self) + + + def _save_state(self): + self._state.save_to_filename(self._state_fn) + + def _list_invitations(self): + inv = self._state.get_all_invitations() + lines = ["%d: %s" % (iid, inv[iid]) for iid in sorted(inv)] + return b"\n".join(lines)+b"\n" + + def _invite(self, args): + print "invite", args + petname = args["petname"] + # it'd be better to use a unique object for the event_handler + # correlation, but we can't store them into the state database. I'm + # not 100% sure we need one for the database: maybe it should hold a + # list instead, and assign lookup keys at runtime. 
If they really
+        # need to be serializable, they should be allocated rather than
+        # random.
+        iid = random.randint(1,1000)
+        my_pubkey = random.randint(1,1000)
+        with self._jm.process():
+            w = wormhole.journaled(reactor=self._reactor, journal=self._jm,
+                                   event_handler=self,
+                                   event_handler_args=(iid,))
+            self._wormholes[iid] = w
+            w.setServiceParent(self)
+            w.get_code() # event_handler means code returns via callback
+            invitation_state = {"wormhole": w.to_data(),
+                                "petname": petname,
+                                "my_pubkey": my_pubkey,
+                                }
+            self._state.add_invitation(iid, invitation_state)
+        return b"ok"
+
+    def _accept(self, args):
+        print "accept", args
+        petname = args["petname"]
+        code = args["code"]
+        iid = random.randint(1,1000)
+        my_pubkey = random.randint(2,2000)
+        with self._jm.process():
+            w = wormhole.journaled(reactor=self._reactor, journal=self._jm,
+                                   event_handler=self,
+                                   event_handler_args=(iid,))
+            self._wormholes[iid] = w
+            w.setServiceParent(self)
+            w.set_code(code)
+            md = {"my_pubkey": my_pubkey}
+            w.send(json.dumps(md).encode("utf-8"))
+            invitation_state = {"wormhole": w.to_data(),
+                                "petname": petname,
+                                "my_pubkey": my_pubkey,
+                                }
+            self._state.add_invitation(iid, invitation_state)
+        return b"ok"
+
+    # dispatch options:
+    # * register one function, which takes (eventname, *args)
+    # * to handle multiple wormholes, app must give it a closure
+    # * register multiple functions (one per event type)
+    # * register an object, with well-known method names
+    # * extra: register args and/or kwargs with the callback
+    #
+    # events to dispatch:
+    # generated_code(code)
+    # got_verifier(verifier_bytes)
+    # verified()
+    # got_data(data_bytes)
+    # closed()
+
+    def wormhole_dispatch_got_code(self, code, iid):
+        # we're already in a jm.process() context
+        invitation_state = self._state.get_all_invitations()[iid]
+        invitation_state["code"] = code
+        self._state.update_invitation(iid, invitation_state)
+        self._wormholes[iid].set_code(code)
+        # notify UI subscribers to update the display
+
+    def wormhole_dispatch_got_verifier(self,
verifier, iid): + pass + def wormhole_dispatch_verified(self, _, iid): + pass + + def wormhole_dispatch_got_data(self, data, iid): + invitation_state = self._state.get_all_invitations()[iid] + md = json.loads(data.decode("utf-8")) + contact = {"petname": invitation_state["petname"], + "my_pubkey": invitation_state["my_pubkey"], + "their_pubkey": md["my_pubkey"], + } + self._state.add_contact(contact) + self._wormholes[iid].close() # now waiting for "closed" + + def wormhole_dispatch_closed(self, _, iid): + self._wormholes[iid].disownServiceParent() + del self._wormholes[iid] + self._state.remove_invitation(iid) + + + def handle_app_event(self, args, ack_f): # sample function + # Imagine here that the app has received a message (not + # wormhole-related) from some other server, and needs to act on it. + # Also imagine that ack_f() is how we tell the sender that they can + # stop sending the message, or how we ask our poller/subscriber + # client to send a DELETE message. If the process dies before ack_f() + # delivers whatever it needs to deliver, then in the next launch, + # handle_app_event() will be called again. 
+    stuff = parse(args)
+    with self._jm.process():
+        update_my_state()
+        self._jm.queue_outbound(ack_f)
+
+def create(reactor, basedir):
+    os.mkdir(basedir)
+    s = State.create_empty()
+    s.save_to_filename(os.path.join(basedir, "state.json"))
+    return defer.succeed(None)
+
+def run(reactor, basedir):
+    a = Agent(basedir, reactor)
+    a.startService()
+    print "agent listening on http://localhost:8220/"
+    d = defer.Deferred()
+    return d
+
+
+if __name__ == "__main__":
+    command = sys.argv[1]
+    basedir = sys.argv[2]
+    if command == "create":
+        task.react(create, (basedir,))
+    elif command == "run":
+        task.react(run, (basedir,))
+    else:
+        print "Unrecognized subcommand '%s'" % command
+        sys.exit(1)
diff --git a/setup.py b/setup.py
index e8ff35b..24101ac 100644
--- a/setup.py
+++ b/setup.py
@@ -45,6 +45,7 @@ setup(name="magic-wormhole",
                          "six",
                          "twisted[tls]",
                          "autobahn[twisted] >= 0.14.1",
+                         "automat",
                          "hkdf", "tqdm", "click", "humanize",
diff --git a/src/wormhole/__init__.py b/src/wormhole/__init__.py
index 74f4e66..c00af56 100644
--- a/src/wormhole/__init__.py
+++ b/src/wormhole/__init__.py
@@ -2,3 +2,8 @@ from ._version import get_versions
 __version__ = get_versions()['version']
 del get_versions
+
+from .wormhole import create
+from ._rlcompleter import input_with_completion
+
+__all__ = ["create", "input_with_completion", "__version__"]
diff --git a/src/wormhole/_allocator.py b/src/wormhole/_allocator.py
new file mode 100644
index 0000000..0644c55
--- /dev/null
+++ b/src/wormhole/_allocator.py
@@ -0,0 +1,75 @@
+from __future__ import print_function, absolute_import, unicode_literals
+from zope.interface import implementer
+from attr import attrs, attrib
+from attr.validators import provides
+from automat import MethodicalMachine
+from .
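handle_app_event() above sketches the journaled at-least-once pattern: mutate state inside a `process()` context, persist the checkpoint at the end, and only then release queued side-effects such as `ack_f`. Assuming `JournalManager` behaves roughly along these lines, an illustrative stdlib reduction (not the real `wormhole.journal` API):

```python
import contextlib

class JournalSketch(object):
    """Defer side-effects until state is persisted: a crash either lands
    before the checkpoint (event replayed) or after (acks delivered)."""
    def __init__(self, save_state):
        self._save_state = save_state
        self._outbound = []

    def queue_outbound(self, f, *args):
        self._outbound.append((f, args))

    @contextlib.contextmanager
    def process(self):
        yield
        self._save_state()  # checkpoint first
        queued, self._outbound = self._outbound, []
        for f, args in queued:  # then release the side-effects
            f(*args)
```

The ordering is the whole point: side-effects never run against unsaved state, so replaying the event after a crash is always safe.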
import _interfaces + +@attrs +@implementer(_interfaces.IAllocator) +class Allocator(object): + _timing = attrib(validator=provides(_interfaces.ITiming)) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def wire(self, rendezvous_connector, code): + self._RC = _interfaces.IRendezvousConnector(rendezvous_connector) + self._C = _interfaces.ICode(code) + + @m.state(initial=True) + def S0A_idle(self): pass # pragma: no cover + @m.state() + def S0B_idle_connected(self): pass # pragma: no cover + @m.state() + def S1A_allocating(self): pass # pragma: no cover + @m.state() + def S1B_allocating_connected(self): pass # pragma: no cover + @m.state() + def S2_done(self): pass # pragma: no cover + + # from Code + @m.input() + def allocate(self, length, wordlist): pass + + # from RendezvousConnector + @m.input() + def connected(self): pass + @m.input() + def lost(self): pass + @m.input() + def rx_allocated(self, nameplate): pass + + @m.output() + def stash(self, length, wordlist): + self._length = length + self._wordlist = _interfaces.IWordlist(wordlist) + @m.output() + def stash_and_RC_rx_allocate(self, length, wordlist): + self._length = length + self._wordlist = _interfaces.IWordlist(wordlist) + self._RC.tx_allocate() + @m.output() + def RC_tx_allocate(self): + self._RC.tx_allocate() + @m.output() + def build_and_notify(self, nameplate): + words = self._wordlist.choose_words(self._length) + code = nameplate + "-" + words + self._C.allocated(nameplate, code) + + S0A_idle.upon(connected, enter=S0B_idle_connected, outputs=[]) + S0B_idle_connected.upon(lost, enter=S0A_idle, outputs=[]) + + S0A_idle.upon(allocate, enter=S1A_allocating, outputs=[stash]) + S0B_idle_connected.upon(allocate, enter=S1B_allocating_connected, + outputs=[stash_and_RC_rx_allocate]) + + S1A_allocating.upon(connected, enter=S1B_allocating_connected, + outputs=[RC_tx_allocate]) + S1B_allocating_connected.upon(lost, enter=S1A_allocating, outputs=[]) + + 
S1B_allocating_connected.upon(rx_allocated, enter=S2_done, + outputs=[build_and_notify]) + + S2_done.upon(connected, enter=S2_done, outputs=[]) + S2_done.upon(lost, enter=S2_done, outputs=[]) diff --git a/src/wormhole/_boss.py b/src/wormhole/_boss.py new file mode 100644 index 0000000..2c93f0b --- /dev/null +++ b/src/wormhole/_boss.py @@ -0,0 +1,343 @@ +from __future__ import print_function, absolute_import, unicode_literals +import re +import six +from zope.interface import implementer +from attr import attrs, attrib +from attr.validators import provides, instance_of +from twisted.python import log +from automat import MethodicalMachine +from . import _interfaces +from ._nameplate import Nameplate +from ._mailbox import Mailbox +from ._send import Send +from ._order import Order +from ._key import Key +from ._receive import Receive +from ._rendezvous import RendezvousConnector +from ._lister import Lister +from ._allocator import Allocator +from ._input import Input +from ._code import Code +from ._terminator import Terminator +from ._wordlist import PGPWordList +from .errors import (ServerError, LonelyError, WrongPasswordError, + KeyFormatError, OnlyOneCodeError, _UnknownPhaseError, + WelcomeError) +from .util import bytes_to_dict + +@attrs +@implementer(_interfaces.IBoss) +class Boss(object): + _W = attrib() + _side = attrib(validator=instance_of(type(u""))) + _url = attrib(validator=instance_of(type(u""))) + _appid = attrib(validator=instance_of(type(u""))) + _versions = attrib(validator=instance_of(dict)) + _welcome_handler = attrib() # TODO: validator: callable + _reactor = attrib() + _journal = attrib(validator=provides(_interfaces.IJournal)) + _tor_manager = attrib() # TODO: ITorManager or None + _timing = attrib(validator=provides(_interfaces.ITiming)) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def __attrs_post_init__(self): + self._build_workers() + self._init_other_state() + + def _build_workers(self): + 
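Allocator.build_and_notify above joins the server-allocated nameplate with locally chosen words to form the full code. A toy illustration of that format (the wordlist class here is a stand-in for `PGPWordList`, invented for this note):

```python
class ToyWordlist(object):
    """Stand-in for PGPWordList: returns `length` hyphen-joined words."""
    def __init__(self, words):
        self._words = words

    def choose_words(self, length):
        return "-".join(self._words[:length])

def build_code(nameplate, wordlist, length):
    # mirrors Allocator.build_and_notify: "<nameplate>-<word>-<word>"
    return nameplate + "-" + wordlist.choose_words(length)
```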
self._N = Nameplate() + self._M = Mailbox(self._side) + self._S = Send(self._side, self._timing) + self._O = Order(self._side, self._timing) + self._K = Key(self._appid, self._versions, self._side, self._timing) + self._R = Receive(self._side, self._timing) + self._RC = RendezvousConnector(self._url, self._appid, self._side, + self._reactor, self._journal, + self._tor_manager, self._timing) + self._L = Lister(self._timing) + self._A = Allocator(self._timing) + self._I = Input(self._timing) + self._C = Code(self._timing) + self._T = Terminator() + + self._N.wire(self._M, self._I, self._RC, self._T) + self._M.wire(self._N, self._RC, self._O, self._T) + self._S.wire(self._M) + self._O.wire(self._K, self._R) + self._K.wire(self, self._M, self._R) + self._R.wire(self, self._S) + self._RC.wire(self, self._N, self._M, self._A, self._L, self._T) + self._L.wire(self._RC, self._I) + self._A.wire(self._RC, self._C) + self._I.wire(self._C, self._L) + self._C.wire(self, self._A, self._N, self._K, self._I) + self._T.wire(self, self._RC, self._N, self._M) + + def _init_other_state(self): + self._did_start_code = False + self._next_tx_phase = 0 + self._next_rx_phase = 0 + self._rx_phases = {} # phase -> plaintext + + self._result = "empty" + + # these methods are called from outside + def start(self): + self._RC.start() + + def _set_trace(self, client_name, which, file): + names = {"B": self, "N": self._N, "M": self._M, "S": self._S, + "O": self._O, "K": self._K, "SK": self._K._SK, "R": self._R, + "RC": self._RC, "L": self._L, "C": self._C, + "T": self._T} + for machine in which.split(): + def tracer(old_state, input, new_state, output, machine=machine): + if output is None: + if new_state: + print("%s.%s[%s].%s -> [%s]" % + (client_name, machine, old_state, input, + new_state), file=file) + else: + # the RendezvousConnector emits message events as if + # they were state transitions, except that old_state + # and new_state are empty strings. 
"input" is one of + # R.connected, R.rx(type phase+side), R.tx(type + # phase), R.lost . + print("%s.%s.%s" % (client_name, machine, input), + file=file) + else: + if new_state: + print(" %s.%s.%s()" % (client_name, machine, output), + file=file) + file.flush() + names[machine].set_trace(tracer) + + ## def serialize(self): + ## raise NotImplemented + + # and these are the state-machine transition functions, which don't take + # args + @m.state(initial=True) + def S0_empty(self): pass # pragma: no cover + @m.state() + def S1_lonely(self): pass # pragma: no cover + @m.state() + def S2_happy(self): pass # pragma: no cover + @m.state() + def S3_closing(self): pass # pragma: no cover + @m.state(terminal=True) + def S4_closed(self): pass # pragma: no cover + + # from the Wormhole + + # input/allocate/set_code are regular methods, not state-transition + # inputs. We expect them to be called just after initialization, while + # we're in the S0_empty state. You must call exactly one of them, and the + # call must happen while we're in S0_empty, which makes them good + # candiates for being a proper @m.input, but set_code() will immediately + # (reentrantly) cause self.got_code() to be fired, which is messy. These + # are all passthroughs to the Code machine, so one alternative would be + # to have Wormhole call Code.{input,allocate,set_code} instead, but that + # would require the Wormhole to be aware of Code (whereas right now + # Wormhole only knows about this Boss instance, and everything else is + # hidden away). + def input_code(self): + if self._did_start_code: + raise OnlyOneCodeError() + self._did_start_code = True + return self._C.input_code() + def allocate_code(self, code_length): + if self._did_start_code: + raise OnlyOneCodeError() + self._did_start_code = True + wl = PGPWordList() + self._C.allocate_code(code_length, wl) + def set_code(self, code): + if ' ' in code: + raise KeyFormatError("code (%s) contains spaces." 
% code) + if self._did_start_code: + raise OnlyOneCodeError() + self._did_start_code = True + self._C.set_code(code) + + @m.input() + def send(self, plaintext): pass + @m.input() + def close(self): pass + + # from RendezvousConnector: + # * "rx_welcome" is the Welcome message, which might signal an error, or + # our welcome_handler might signal one + # * "rx_error" is error message from the server (probably because of + # something we said badly, or due to CrowdedError) + # * "error" is when an exception happened while it tried to deliver + # something else + def rx_welcome(self, welcome): + try: + if "error" in welcome: + raise WelcomeError(welcome["error"]) + # TODO: it'd be nice to not call the handler when we're in + # S3_closing or S4_closed states. I tried to implement this with + # rx_Welcome as an @input, but in the error case I'd be + # delivering a new input (rx_error or something) while in the + # middle of processing the rx_welcome input, and I wasn't sure + # Automat would handle that correctly. + self._welcome_handler(welcome) # can raise WelcomeError too + except WelcomeError as welcome_error: + self.rx_unwelcome(welcome_error) + @m.input() + def rx_unwelcome(self, welcome_error): pass + @m.input() + def rx_error(self, errmsg, orig): pass + @m.input() + def error(self, err): pass + + # from Code (provoked by input/allocate/set_code) + @m.input() + def got_code(self, code): pass + + # Key sends (got_key, scared) + # Receive sends (got_message, happy, got_verifier, scared) + @m.input() + def happy(self): pass + @m.input() + def scared(self): pass + + def got_message(self, phase, plaintext): + assert isinstance(phase, type("")), type(phase) + assert isinstance(plaintext, type(b"")), type(plaintext) + if phase == "version": + self._got_version(plaintext) + elif re.search(r'^\d+$', phase): + self._got_phase(int(phase), plaintext) + else: + # Ignore unrecognized phases, for forwards-compatibility. Use + # log.err so tests will catch surprises. 
+ log.err(_UnknownPhaseError("received unknown phase '%s'" % phase)) + @m.input() + def _got_version(self, plaintext): pass + @m.input() + def _got_phase(self, phase, plaintext): pass + @m.input() + def got_key(self, key): pass + @m.input() + def got_verifier(self, verifier): pass + + # Terminator sends closed + @m.input() + def closed(self): pass + + @m.output() + def do_got_code(self, code): + self._W.got_code(code) + @m.output() + def process_version(self, plaintext): + # most of this is wormhole-to-wormhole, ignored for now + # in the future, this is how Dilation is signalled + self._their_versions = bytes_to_dict(plaintext) + # but this part is app-to-app + app_versions = self._their_versions.get("app_versions", {}) + self._W.got_version(app_versions) + + @m.output() + def S_send(self, plaintext): + assert isinstance(plaintext, type(b"")), type(plaintext) + phase = self._next_tx_phase + self._next_tx_phase += 1 + self._S.send("%d" % phase, plaintext) + + @m.output() + def close_unwelcome(self, welcome_error): + #assert isinstance(err, WelcomeError) + self._result = welcome_error + self._T.close("unwelcome") + @m.output() + def close_error(self, errmsg, orig): + self._result = ServerError(errmsg) + self._T.close("errory") + @m.output() + def close_scared(self): + self._result = WrongPasswordError() + self._T.close("scary") + @m.output() + def close_lonely(self): + self._result = LonelyError() + self._T.close("lonely") + @m.output() + def close_happy(self): + self._result = "happy" + self._T.close("happy") + + @m.output() + def W_got_key(self, key): + self._W.got_key(key) + @m.output() + def W_got_verifier(self, verifier): + self._W.got_verifier(verifier) + @m.output() + def W_received(self, phase, plaintext): + assert isinstance(phase, six.integer_types), type(phase) + # we call Wormhole.received() in strict phase order, with no gaps + self._rx_phases[phase] = plaintext + while self._next_rx_phase in self._rx_phases: + 
self._W.received(self._rx_phases.pop(self._next_rx_phase)) + self._next_rx_phase += 1 + + @m.output() + def W_close_with_error(self, err): + self._result = err # exception + self._W.closed(self._result) + + @m.output() + def W_closed(self): + # result is either "happy" or a WormholeError of some sort + self._W.closed(self._result) + + S0_empty.upon(close, enter=S3_closing, outputs=[close_lonely]) + S0_empty.upon(send, enter=S0_empty, outputs=[S_send]) + S0_empty.upon(rx_unwelcome, enter=S3_closing, outputs=[close_unwelcome]) + S0_empty.upon(got_code, enter=S1_lonely, outputs=[do_got_code]) + S0_empty.upon(rx_error, enter=S3_closing, outputs=[close_error]) + S0_empty.upon(error, enter=S4_closed, outputs=[W_close_with_error]) + + S1_lonely.upon(rx_unwelcome, enter=S3_closing, outputs=[close_unwelcome]) + S1_lonely.upon(happy, enter=S2_happy, outputs=[]) + S1_lonely.upon(scared, enter=S3_closing, outputs=[close_scared]) + S1_lonely.upon(close, enter=S3_closing, outputs=[close_lonely]) + S1_lonely.upon(send, enter=S1_lonely, outputs=[S_send]) + S1_lonely.upon(got_key, enter=S1_lonely, outputs=[W_got_key]) + S1_lonely.upon(rx_error, enter=S3_closing, outputs=[close_error]) + S1_lonely.upon(error, enter=S4_closed, outputs=[W_close_with_error]) + + S2_happy.upon(rx_unwelcome, enter=S3_closing, outputs=[close_unwelcome]) + S2_happy.upon(got_verifier, enter=S2_happy, outputs=[W_got_verifier]) + S2_happy.upon(_got_phase, enter=S2_happy, outputs=[W_received]) + S2_happy.upon(_got_version, enter=S2_happy, outputs=[process_version]) + S2_happy.upon(scared, enter=S3_closing, outputs=[close_scared]) + S2_happy.upon(close, enter=S3_closing, outputs=[close_happy]) + S2_happy.upon(send, enter=S2_happy, outputs=[S_send]) + S2_happy.upon(rx_error, enter=S3_closing, outputs=[close_error]) + S2_happy.upon(error, enter=S4_closed, outputs=[W_close_with_error]) + + S3_closing.upon(rx_unwelcome, enter=S3_closing, outputs=[]) + S3_closing.upon(rx_error, enter=S3_closing, outputs=[]) + 
S3_closing.upon(got_verifier, enter=S3_closing, outputs=[]) + S3_closing.upon(_got_phase, enter=S3_closing, outputs=[]) + S3_closing.upon(_got_version, enter=S3_closing, outputs=[]) + S3_closing.upon(happy, enter=S3_closing, outputs=[]) + S3_closing.upon(scared, enter=S3_closing, outputs=[]) + S3_closing.upon(close, enter=S3_closing, outputs=[]) + S3_closing.upon(send, enter=S3_closing, outputs=[]) + S3_closing.upon(closed, enter=S4_closed, outputs=[W_closed]) + S3_closing.upon(error, enter=S4_closed, outputs=[W_close_with_error]) + + S4_closed.upon(rx_unwelcome, enter=S4_closed, outputs=[]) + S4_closed.upon(got_verifier, enter=S4_closed, outputs=[]) + S4_closed.upon(_got_phase, enter=S4_closed, outputs=[]) + S4_closed.upon(_got_version, enter=S4_closed, outputs=[]) + S4_closed.upon(happy, enter=S4_closed, outputs=[]) + S4_closed.upon(scared, enter=S4_closed, outputs=[]) + S4_closed.upon(close, enter=S4_closed, outputs=[]) + S4_closed.upon(send, enter=S4_closed, outputs=[]) + S4_closed.upon(error, enter=S4_closed, outputs=[]) diff --git a/src/wormhole/_code.py b/src/wormhole/_code.py new file mode 100644 index 0000000..b2a9a20 --- /dev/null +++ b/src/wormhole/_code.py @@ -0,0 +1,90 @@ +from __future__ import print_function, absolute_import, unicode_literals +from zope.interface import implementer +from attr import attrs, attrib +from attr.validators import provides +from automat import MethodicalMachine +from . 
import _interfaces + +def first(outputs): + return list(outputs)[0] + +@attrs +@implementer(_interfaces.ICode) +class Code(object): + _timing = attrib(validator=provides(_interfaces.ITiming)) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def wire(self, boss, allocator, nameplate, key, input): + self._B = _interfaces.IBoss(boss) + self._A = _interfaces.IAllocator(allocator) + self._N = _interfaces.INameplate(nameplate) + self._K = _interfaces.IKey(key) + self._I = _interfaces.IInput(input) + + @m.state(initial=True) + def S0_idle(self): pass # pragma: no cover + @m.state() + def S1_inputting_nameplate(self): pass # pragma: no cover + @m.state() + def S2_inputting_words(self): pass # pragma: no cover + @m.state() + def S3_allocating(self): pass # pragma: no cover + @m.state() + def S4_known(self): pass # pragma: no cover + + # from App + @m.input() + def allocate_code(self, length, wordlist): pass + @m.input() + def input_code(self): pass + @m.input() + def set_code(self, code): pass + + # from Allocator + @m.input() + def allocated(self, nameplate, code): pass + + # from Input + @m.input() + def got_nameplate(self, nameplate): pass + @m.input() + def finished_input(self, code): pass + + @m.output() + def do_set_code(self, code): + nameplate = code.split("-", 2)[0] + self._N.set_nameplate(nameplate) + self._B.got_code(code) + self._K.got_code(code) + + @m.output() + def do_start_input(self): + return self._I.start() + @m.output() + def do_middle_input(self, nameplate): + self._N.set_nameplate(nameplate) + @m.output() + def do_finish_input(self, code): + self._B.got_code(code) + self._K.got_code(code) + + @m.output() + def do_start_allocate(self, length, wordlist): + self._A.allocate(length, wordlist) + @m.output() + def do_finish_allocate(self, nameplate, code): + assert code.startswith(nameplate+"-"), (nameplate, code) + self._N.set_nameplate(nameplate) + self._B.got_code(code) + self._K.got_code(code) + + 
S0_idle.upon(set_code, enter=S4_known, outputs=[do_set_code]) + S0_idle.upon(input_code, enter=S1_inputting_nameplate, + outputs=[do_start_input], collector=first) + S1_inputting_nameplate.upon(got_nameplate, enter=S2_inputting_words, + outputs=[do_middle_input]) + S2_inputting_words.upon(finished_input, enter=S4_known, + outputs=[do_finish_input]) + S0_idle.upon(allocate_code, enter=S3_allocating, outputs=[do_start_allocate]) + S3_allocating.upon(allocated, enter=S4_known, outputs=[do_finish_allocate]) diff --git a/src/wormhole/_input.py b/src/wormhole/_input.py new file mode 100644 index 0000000..6985253 --- /dev/null +++ b/src/wormhole/_input.py @@ -0,0 +1,240 @@ +from __future__ import print_function, absolute_import, unicode_literals +from zope.interface import implementer +from attr import attrs, attrib +from attr.validators import provides +from twisted.internet import defer +from automat import MethodicalMachine +from . import _interfaces, errors + +def first(outputs): + return list(outputs)[0] + +@attrs +@implementer(_interfaces.IInput) +class Input(object): + _timing = attrib(validator=provides(_interfaces.ITiming)) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def __attrs_post_init__(self): + self._all_nameplates = set() + self._nameplate = None + self._wordlist = None + self._wordlist_waiters = [] + + def wire(self, code, lister): + self._C = _interfaces.ICode(code) + self._L = _interfaces.ILister(lister) + + def when_wordlist_is_available(self): + if self._wordlist: + return defer.succeed(None) + d = defer.Deferred() + self._wordlist_waiters.append(d) + return d + + @m.state(initial=True) + def S0_idle(self): pass # pragma: no cover + @m.state() + def S1_typing_nameplate(self): pass # pragma: no cover + @m.state() + def S2_typing_code_no_wordlist(self): pass # pragma: no cover + @m.state() + def S3_typing_code_yes_wordlist(self): pass # pragma: no cover + @m.state(terminal=True) + def S4_done(self): pass # 
pragma: no cover + + # from Code + @m.input() + def start(self): pass + + # from Lister + @m.input() + def got_nameplates(self, all_nameplates): pass + + # from Nameplate + @m.input() + def got_wordlist(self, wordlist): pass + + # API provided to app as ICodeInputHelper + @m.input() + def refresh_nameplates(self): pass + @m.input() + def get_nameplate_completions(self, prefix): pass + @m.input() + def choose_nameplate(self, nameplate): pass + @m.input() + def get_word_completions(self, prefix): pass + @m.input() + def choose_words(self, words): pass + + @m.output() + def do_start(self): + self._L.refresh() + return Helper(self) + @m.output() + def do_refresh(self): + self._L.refresh() + @m.output() + def record_nameplates(self, all_nameplates): + # we get a set of nameplate id strings + self._all_nameplates = all_nameplates + @m.output() + def _get_nameplate_completions(self, prefix): + completions = set() + for nameplate in self._all_nameplates: + if nameplate.startswith(prefix): + # TODO: it's a little weird that Input is responsible for the + # hyphen on nameplates, but WordList owns it for words + completions.add(nameplate+"-") + return completions + @m.output() + def record_all_nameplates(self, nameplate): + self._nameplate = nameplate + self._C.got_nameplate(nameplate) + @m.output() + def record_wordlist(self, wordlist): + from ._rlcompleter import debug + debug(" -record_wordlist") + self._wordlist = wordlist + @m.output() + def notify_wordlist_waiters(self, wordlist): + while self._wordlist_waiters: + d = self._wordlist_waiters.pop() + d.callback(None) + + @m.output() + def no_word_completions(self, prefix): + return set() + @m.output() + def _get_word_completions(self, prefix): + assert self._wordlist + return self._wordlist.get_completions(prefix) + + @m.output() + def raise_must_choose_nameplate1(self, prefix): + raise errors.MustChooseNameplateFirstError() + @m.output() + def raise_must_choose_nameplate2(self, words): + raise 
errors.MustChooseNameplateFirstError() + @m.output() + def raise_already_chose_nameplate1(self): + raise errors.AlreadyChoseNameplateError() + @m.output() + def raise_already_chose_nameplate2(self, prefix): + raise errors.AlreadyChoseNameplateError() + @m.output() + def raise_already_chose_nameplate3(self, nameplate): + raise errors.AlreadyChoseNameplateError() + @m.output() + def raise_already_chose_words1(self, prefix): + raise errors.AlreadyChoseWordsError() + @m.output() + def raise_already_chose_words2(self, words): + raise errors.AlreadyChoseWordsError() + + @m.output() + def do_words(self, words): + code = self._nameplate + "-" + words + self._C.finished_input(code) + + S0_idle.upon(start, enter=S1_typing_nameplate, + outputs=[do_start], collector=first) + # wormholes that don't use input_code (i.e. they use allocate_code or + # generate_code) will never start() us, but Nameplate will give us a + # wordlist anyways (as soon as the nameplate is claimed), so handle it. + S0_idle.upon(got_wordlist, enter=S0_idle, outputs=[record_wordlist, + notify_wordlist_waiters]) + S1_typing_nameplate.upon(got_nameplates, enter=S1_typing_nameplate, + outputs=[record_nameplates]) + # but wormholes that *do* use input_code should not get got_wordlist + # until after we tell Code that we got_nameplate, which is the earliest + # it can be claimed + S1_typing_nameplate.upon(refresh_nameplates, enter=S1_typing_nameplate, + outputs=[do_refresh]) + S1_typing_nameplate.upon(get_nameplate_completions, + enter=S1_typing_nameplate, + outputs=[_get_nameplate_completions], + collector=first) + S1_typing_nameplate.upon(choose_nameplate, enter=S2_typing_code_no_wordlist, + outputs=[record_all_nameplates]) + S1_typing_nameplate.upon(get_word_completions, + enter=S1_typing_nameplate, + outputs=[raise_must_choose_nameplate1]) + S1_typing_nameplate.upon(choose_words, enter=S1_typing_nameplate, + outputs=[raise_must_choose_nameplate2]) + + S2_typing_code_no_wordlist.upon(got_nameplates, + 
enter=S2_typing_code_no_wordlist, outputs=[]) + S2_typing_code_no_wordlist.upon(got_wordlist, + enter=S3_typing_code_yes_wordlist, + outputs=[record_wordlist, + notify_wordlist_waiters]) + S2_typing_code_no_wordlist.upon(refresh_nameplates, + enter=S2_typing_code_no_wordlist, + outputs=[raise_already_chose_nameplate1]) + S2_typing_code_no_wordlist.upon(get_nameplate_completions, + enter=S2_typing_code_no_wordlist, + outputs=[raise_already_chose_nameplate2]) + S2_typing_code_no_wordlist.upon(choose_nameplate, + enter=S2_typing_code_no_wordlist, + outputs=[raise_already_chose_nameplate3]) + S2_typing_code_no_wordlist.upon(get_word_completions, + enter=S2_typing_code_no_wordlist, + outputs=[no_word_completions], + collector=first) + S2_typing_code_no_wordlist.upon(choose_words, enter=S4_done, + outputs=[do_words]) + + S3_typing_code_yes_wordlist.upon(got_nameplates, + enter=S3_typing_code_yes_wordlist, + outputs=[]) + # got_wordlist: should never happen + S3_typing_code_yes_wordlist.upon(refresh_nameplates, + enter=S3_typing_code_yes_wordlist, + outputs=[raise_already_chose_nameplate1]) + S3_typing_code_yes_wordlist.upon(get_nameplate_completions, + enter=S3_typing_code_yes_wordlist, + outputs=[raise_already_chose_nameplate2]) + S3_typing_code_yes_wordlist.upon(choose_nameplate, + enter=S3_typing_code_yes_wordlist, + outputs=[raise_already_chose_nameplate3]) + S3_typing_code_yes_wordlist.upon(get_word_completions, + enter=S3_typing_code_yes_wordlist, + outputs=[_get_word_completions], + collector=first) + S3_typing_code_yes_wordlist.upon(choose_words, enter=S4_done, + outputs=[do_words]) + + S4_done.upon(got_nameplates, enter=S4_done, outputs=[]) + S4_done.upon(got_wordlist, enter=S4_done, outputs=[]) + S4_done.upon(refresh_nameplates, + enter=S4_done, + outputs=[raise_already_chose_nameplate1]) + S4_done.upon(get_nameplate_completions, + enter=S4_done, + outputs=[raise_already_chose_nameplate2]) + S4_done.upon(choose_nameplate, enter=S4_done, + 
outputs=[raise_already_chose_nameplate3]) + S4_done.upon(get_word_completions, enter=S4_done, + outputs=[raise_already_chose_words1]) + S4_done.upon(choose_words, enter=S4_done, + outputs=[raise_already_chose_words2]) + +# we only expose the Helper to application code, not _Input +@attrs +class Helper(object): + _input = attrib() + + def refresh_nameplates(self): + self._input.refresh_nameplates() + def get_nameplate_completions(self, prefix): + return self._input.get_nameplate_completions(prefix) + def choose_nameplate(self, nameplate): + self._input.choose_nameplate(nameplate) + def when_wordlist_is_available(self): + return self._input.when_wordlist_is_available() + def get_word_completions(self, prefix): + return self._input.get_word_completions(prefix) + def choose_words(self, words): + self._input.choose_words(words) diff --git a/src/wormhole/_interfaces.py b/src/wormhole/_interfaces.py new file mode 100644 index 0000000..11b562a --- /dev/null +++ b/src/wormhole/_interfaces.py @@ -0,0 +1,45 @@ +from zope.interface import Interface + +class IWormhole(Interface): + pass +class IBoss(Interface): + pass +class INameplate(Interface): + pass +class IMailbox(Interface): + pass +class ISend(Interface): + pass +class IOrder(Interface): + pass +class IKey(Interface): + pass +class IReceive(Interface): + pass +class IRendezvousConnector(Interface): + pass +class ILister(Interface): + pass +class ICode(Interface): + pass +class IInput(Interface): + pass +class IAllocator(Interface): + pass +class ITerminator(Interface): + pass + +class ITiming(Interface): + pass +class ITorManager(Interface): + pass +class IWordlist(Interface): + def choose_words(length): + """Randomly select LENGTH words, join them with hyphens, return the + result.""" + def get_completions(prefix): + """Return a list of all suffixes that could complete the given + prefix.""" + +class IJournal(Interface): # TODO: this needs to be public + pass diff --git a/src/wormhole/_key.py b/src/wormhole/_key.py new 
file mode 100644 index 0000000..91c972f --- /dev/null +++ b/src/wormhole/_key.py @@ -0,0 +1,178 @@ +from __future__ import print_function, absolute_import, unicode_literals +from hashlib import sha256 +import six +from zope.interface import implementer +from attr import attrs, attrib +from attr.validators import provides, instance_of +from spake2 import SPAKE2_Symmetric +from hkdf import Hkdf +from nacl.secret import SecretBox +from nacl.exceptions import CryptoError +from nacl import utils +from automat import MethodicalMachine +from .util import (to_bytes, bytes_to_hexstr, hexstr_to_bytes, + bytes_to_dict, dict_to_bytes) +from . import _interfaces +CryptoError +__all__ = ["derive_key", "derive_phase_key", "CryptoError", + "Key"] + +def HKDF(skm, outlen, salt=None, CTXinfo=b""): + return Hkdf(salt, skm).expand(CTXinfo, outlen) + +def derive_key(key, purpose, length=SecretBox.KEY_SIZE): + if not isinstance(key, type(b"")): raise TypeError(type(key)) + if not isinstance(purpose, type(b"")): raise TypeError(type(purpose)) + if not isinstance(length, six.integer_types): raise TypeError(type(length)) + return HKDF(key, length, CTXinfo=purpose) + +def derive_phase_key(key, side, phase): + assert isinstance(side, type("")), type(side) + assert isinstance(phase, type("")), type(phase) + side_bytes = side.encode("ascii") + phase_bytes = phase.encode("ascii") + purpose = (b"wormhole:phase:" + + sha256(side_bytes).digest() + + sha256(phase_bytes).digest()) + return derive_key(key, purpose) + +def decrypt_data(key, encrypted): + assert isinstance(key, type(b"")), type(key) + assert isinstance(encrypted, type(b"")), type(encrypted) + assert len(key) == SecretBox.KEY_SIZE, len(key) + box = SecretBox(key) + data = box.decrypt(encrypted) + return data + +def encrypt_data(key, plaintext): + assert isinstance(key, type(b"")), type(key) + assert isinstance(plaintext, type(b"")), type(plaintext) + assert len(key) == SecretBox.KEY_SIZE, len(key) + box = SecretBox(key) + nonce = 
utils.random(SecretBox.NONCE_SIZE) + return box.encrypt(plaintext, nonce) + +# the Key we expose to callers (Boss, Ordering) is responsible for sorting +# the two messages (got_code and got_pake), then delivering them to +# _SortedKey in the right order. + +@attrs +@implementer(_interfaces.IKey) +class Key(object): + _appid = attrib(validator=instance_of(type(u""))) + _versions = attrib(validator=instance_of(dict)) + _side = attrib(validator=instance_of(type(u""))) + _timing = attrib(validator=provides(_interfaces.ITiming)) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def __attrs_post_init__(self): + self._SK = _SortedKey(self._appid, self._versions, self._side, + self._timing) + self._debug_pake_stashed = False # for tests + + def wire(self, boss, mailbox, receive): + self._SK.wire(boss, mailbox, receive) + + @m.state(initial=True) + def S00(self): pass # pragma: no cover + @m.state() + def S01(self): pass # pragma: no cover + @m.state() + def S10(self): pass # pragma: no cover + @m.state() + def S11(self): pass # pragma: no cover + + @m.input() + def got_code(self, code): pass + @m.input() + def got_pake(self, body): pass + + @m.output() + def stash_pake(self, body): + self._pake = body + self._debug_pake_stashed = True + @m.output() + def deliver_code(self, code): + self._SK.got_code(code) + @m.output() + def deliver_pake(self, body): + self._SK.got_pake(body) + @m.output() + def deliver_code_and_stashed_pake(self, code): + self._SK.got_code(code) + self._SK.got_pake(self._pake) + + S00.upon(got_code, enter=S10, outputs=[deliver_code]) + S10.upon(got_pake, enter=S11, outputs=[deliver_pake]) + S00.upon(got_pake, enter=S01, outputs=[stash_pake]) + S01.upon(got_code, enter=S11, outputs=[deliver_code_and_stashed_pake]) + +@attrs +class _SortedKey(object): + _appid = attrib(validator=instance_of(type(u""))) + _versions = attrib(validator=instance_of(dict)) + _side = attrib(validator=instance_of(type(u""))) + _timing = 
attrib(validator=provides(_interfaces.ITiming)) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def wire(self, boss, mailbox, receive): + self._B = _interfaces.IBoss(boss) + self._M = _interfaces.IMailbox(mailbox) + self._R = _interfaces.IReceive(receive) + + @m.state(initial=True) + def S0_know_nothing(self): pass # pragma: no cover + @m.state() + def S1_know_code(self): pass # pragma: no cover + @m.state() + def S2_know_key(self): pass # pragma: no cover + @m.state(terminal=True) + def S3_scared(self): pass # pragma: no cover + + # from Boss + @m.input() + def got_code(self, code): pass + + # from Ordering + def got_pake(self, body): + assert isinstance(body, type(b"")), type(body) + payload = bytes_to_dict(body) + if "pake_v1" in payload: + self.got_pake_good(hexstr_to_bytes(payload["pake_v1"])) + else: + self.got_pake_bad() + @m.input() + def got_pake_good(self, msg2): pass + @m.input() + def got_pake_bad(self): pass + + @m.output() + def build_pake(self, code): + with self._timing.add("pake1", waiting="crypto"): + self._sp = SPAKE2_Symmetric(to_bytes(code), + idSymmetric=to_bytes(self._appid)) + msg1 = self._sp.start() + body = dict_to_bytes({"pake_v1": bytes_to_hexstr(msg1)}) + self._M.add_message("pake", body) + + @m.output() + def scared(self): + self._B.scared() + @m.output() + def compute_key(self, msg2): + assert isinstance(msg2, type(b"")) + with self._timing.add("pake2", waiting="crypto"): + key = self._sp.finish(msg2) + self._B.got_key(key) + phase = "version" + data_key = derive_phase_key(key, self._side, phase) + plaintext = dict_to_bytes(self._versions) + encrypted = encrypt_data(data_key, plaintext) + self._M.add_message(phase, encrypted) + self._R.got_key(key) + + S0_know_nothing.upon(got_code, enter=S1_know_code, outputs=[build_pake]) + S1_know_code.upon(got_pake_good, enter=S2_know_key, outputs=[compute_key]) + S1_know_code.upon(got_pake_bad, enter=S3_scared, outputs=[scared]) diff --git 
a/src/wormhole/_lister.py b/src/wormhole/_lister.py new file mode 100644 index 0000000..cd1a560 --- /dev/null +++ b/src/wormhole/_lister.py @@ -0,0 +1,73 @@ +from __future__ import print_function, absolute_import, unicode_literals +from zope.interface import implementer +from attr import attrs, attrib +from attr.validators import provides +from automat import MethodicalMachine +from . import _interfaces + +@attrs +@implementer(_interfaces.ILister) +class Lister(object): + _timing = attrib(validator=provides(_interfaces.ITiming)) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def wire(self, rendezvous_connector, input): + self._RC = _interfaces.IRendezvousConnector(rendezvous_connector) + self._I = _interfaces.IInput(input) + + # Ideally, each API request would spawn a new "list_nameplates" message + # to the server, so the response would be maximally fresh, but that would + # require correlating server request+response messages, and the protocol + # is intended to be less stateful than that. So we offer a weaker + # freshness property: if no server requests are in flight, then a new API + # request will provoke a new server request, and the result will be + # fresh. But if a server request is already in flight when a second API + # request arrives, both requests will be satisfied by the same response. 
+ + @m.state(initial=True) + def S0A_idle_disconnected(self): pass # pragma: no cover + @m.state() + def S1A_wanting_disconnected(self): pass # pragma: no cover + @m.state() + def S0B_idle_connected(self): pass # pragma: no cover + @m.state() + def S1B_wanting_connected(self): pass # pragma: no cover + + @m.input() + def connected(self): pass + @m.input() + def lost(self): pass + @m.input() + def refresh(self): pass + @m.input() + def rx_nameplates(self, all_nameplates): pass + + @m.output() + def RC_tx_list(self): + self._RC.tx_list() + @m.output() + def I_got_nameplates(self, all_nameplates): + # We get a set of nameplate ids. There may be more attributes in the + # future: change RendezvousConnector._response_handle_nameplates to + # get them + self._I.got_nameplates(all_nameplates) + + S0A_idle_disconnected.upon(connected, enter=S0B_idle_connected, outputs=[]) + S0B_idle_connected.upon(lost, enter=S0A_idle_disconnected, outputs=[]) + + S0A_idle_disconnected.upon(refresh, + enter=S1A_wanting_disconnected, outputs=[]) + S1A_wanting_disconnected.upon(refresh, + enter=S1A_wanting_disconnected, outputs=[]) + S1A_wanting_disconnected.upon(connected, enter=S1B_wanting_connected, + outputs=[RC_tx_list]) + S0B_idle_connected.upon(refresh, enter=S1B_wanting_connected, + outputs=[RC_tx_list]) + S0B_idle_connected.upon(rx_nameplates, enter=S0B_idle_connected, + outputs=[I_got_nameplates]) + S1B_wanting_connected.upon(lost, enter=S1A_wanting_disconnected, outputs=[]) + S1B_wanting_connected.upon(refresh, enter=S1B_wanting_connected, + outputs=[RC_tx_list]) + S1B_wanting_connected.upon(rx_nameplates, enter=S0B_idle_connected, + outputs=[I_got_nameplates]) diff --git a/src/wormhole/_mailbox.py b/src/wormhole/_mailbox.py new file mode 100644 index 0000000..3bca6fb --- /dev/null +++ b/src/wormhole/_mailbox.py @@ -0,0 +1,195 @@ +from __future__ import print_function, absolute_import, unicode_literals +from zope.interface import implementer +from attr import attrs, attrib +from 
attr.validators import instance_of +from automat import MethodicalMachine +from . import _interfaces + +@attrs +@implementer(_interfaces.IMailbox) +class Mailbox(object): + _side = attrib(validator=instance_of(type(u""))) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def __attrs_post_init__(self): + self._mailbox = None + self._pending_outbound = {} + self._processed = set() + + def wire(self, nameplate, rendezvous_connector, ordering, terminator): + self._N = _interfaces.INameplate(nameplate) + self._RC = _interfaces.IRendezvousConnector(rendezvous_connector) + self._O = _interfaces.IOrder(ordering) + self._T = _interfaces.ITerminator(terminator) + + # all -A states: not connected + # all -B states: yes connected + # B states serialize as A, so they deserialize as unconnected + + # S0: know nothing + @m.state(initial=True) + def S0A(self): pass # pragma: no cover + @m.state() + def S0B(self): pass # pragma: no cover + + # S1: mailbox known, not opened + @m.state() + def S1A(self): pass # pragma: no cover + + # S2: mailbox known, opened + # We've definitely tried to open the mailbox at least once, but it must + # be re-opened with each connection, because open() is also subscribe() + @m.state() + def S2A(self): pass # pragma: no cover + @m.state() + def S2B(self): pass # pragma: no cover + + # S3: closing + @m.state() + def S3A(self): pass # pragma: no cover + @m.state() + def S3B(self): pass # pragma: no cover + + # S4: closed. 
We no longer care whether we're connected or not + #@m.state() + #def S4A(self): pass + #@m.state() + #def S4B(self): pass + @m.state(terminal=True) + def S4(self): pass # pragma: no cover + S4A = S4 + S4B = S4 + + + # from Terminator + @m.input() + def close(self, mood): pass + + # from Nameplate + @m.input() + def got_mailbox(self, mailbox): pass + + # from RendezvousConnector + @m.input() + def connected(self): pass + @m.input() + def lost(self): pass + + def rx_message(self, side, phase, body): + assert isinstance(side, type("")), type(side) + assert isinstance(phase, type("")), type(phase) + assert isinstance(body, type(b"")), type(body) + if side == self._side: + self.rx_message_ours(phase, body) + else: + self.rx_message_theirs(side, phase, body) + @m.input() + def rx_message_ours(self, phase, body): pass + @m.input() + def rx_message_theirs(self, side, phase, body): pass + @m.input() + def rx_closed(self): pass + + # from Send or Key + @m.input() + def add_message(self, phase, body): + pass + + + @m.output() + def record_mailbox(self, mailbox): + self._mailbox = mailbox + @m.output() + def RC_tx_open(self): + assert self._mailbox + self._RC.tx_open(self._mailbox) + @m.output() + def queue(self, phase, body): + assert isinstance(phase, type("")), type(phase) + assert isinstance(body, type(b"")), (type(body), phase, body) + self._pending_outbound[phase] = body + @m.output() + def record_mailbox_and_RC_tx_open_and_drain(self, mailbox): + self._mailbox = mailbox + self._RC.tx_open(mailbox) + self._drain() + @m.output() + def drain(self): + self._drain() + def _drain(self): + for phase, body in self._pending_outbound.items(): + self._RC.tx_add(phase, body) + @m.output() + def RC_tx_add(self, phase, body): + assert isinstance(phase, type("")), type(phase) + assert isinstance(body, type(b"")), type(body) + self._RC.tx_add(phase, body) + @m.output() + def N_release_and_accept(self, side, phase, body): + self._N.release() + if phase not in self._processed: + 
self._processed.add(phase) + self._O.got_message(side, phase, body) + @m.output() + def RC_tx_close(self): + assert self._mood + self._RC_tx_close() + def _RC_tx_close(self): + self._RC.tx_close(self._mailbox, self._mood) + + @m.output() + def dequeue(self, phase, body): + self._pending_outbound.pop(phase, None) + @m.output() + def record_mood(self, mood): + self._mood = mood + @m.output() + def record_mood_and_RC_tx_close(self, mood): + self._mood = mood + self._RC_tx_close() + @m.output() + def ignore_mood_and_T_mailbox_done(self, mood): + self._T.mailbox_done() + @m.output() + def T_mailbox_done(self): + self._T.mailbox_done() + + S0A.upon(connected, enter=S0B, outputs=[]) + S0A.upon(got_mailbox, enter=S1A, outputs=[record_mailbox]) + S0A.upon(add_message, enter=S0A, outputs=[queue]) + S0A.upon(close, enter=S4A, outputs=[ignore_mood_and_T_mailbox_done]) + S0B.upon(lost, enter=S0A, outputs=[]) + S0B.upon(add_message, enter=S0B, outputs=[queue]) + S0B.upon(close, enter=S4B, outputs=[ignore_mood_and_T_mailbox_done]) + S0B.upon(got_mailbox, enter=S2B, + outputs=[record_mailbox_and_RC_tx_open_and_drain]) + + S1A.upon(connected, enter=S2B, outputs=[RC_tx_open, drain]) + S1A.upon(add_message, enter=S1A, outputs=[queue]) + S1A.upon(close, enter=S4A, outputs=[ignore_mood_and_T_mailbox_done]) + + S2A.upon(connected, enter=S2B, outputs=[RC_tx_open, drain]) + S2A.upon(add_message, enter=S2A, outputs=[queue]) + S2A.upon(close, enter=S3A, outputs=[record_mood]) + S2B.upon(lost, enter=S2A, outputs=[]) + S2B.upon(add_message, enter=S2B, outputs=[queue, RC_tx_add]) + S2B.upon(rx_message_theirs, enter=S2B, outputs=[N_release_and_accept]) + S2B.upon(rx_message_ours, enter=S2B, outputs=[dequeue]) + S2B.upon(close, enter=S3B, outputs=[record_mood_and_RC_tx_close]) + + S3A.upon(connected, enter=S3B, outputs=[RC_tx_close]) + S3B.upon(lost, enter=S3A, outputs=[]) + S3B.upon(rx_closed, enter=S4B, outputs=[T_mailbox_done]) + S3B.upon(add_message, enter=S3B, outputs=[]) + 
S3B.upon(rx_message_theirs, enter=S3B, outputs=[]) + S3B.upon(rx_message_ours, enter=S3B, outputs=[]) + S3B.upon(close, enter=S3B, outputs=[]) + + S4A.upon(connected, enter=S4B, outputs=[]) + S4B.upon(lost, enter=S4A, outputs=[]) + S4.upon(add_message, enter=S4, outputs=[]) + S4.upon(rx_message_theirs, enter=S4, outputs=[]) + S4.upon(rx_message_ours, enter=S4, outputs=[]) + S4.upon(close, enter=S4, outputs=[]) + diff --git a/src/wormhole/_nameplate.py b/src/wormhole/_nameplate.py new file mode 100644 index 0000000..8ee8025 --- /dev/null +++ b/src/wormhole/_nameplate.py @@ -0,0 +1,153 @@ +from __future__ import print_function, absolute_import, unicode_literals +from zope.interface import implementer +from automat import MethodicalMachine +from . import _interfaces +from ._wordlist import PGPWordList + +@implementer(_interfaces.INameplate) +class Nameplate(object): + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def __init__(self): + self._nameplate = None + + def wire(self, mailbox, input, rendezvous_connector, terminator): + self._M = _interfaces.IMailbox(mailbox) + self._I = _interfaces.IInput(input) + self._RC = _interfaces.IRendezvousConnector(rendezvous_connector) + self._T = _interfaces.ITerminator(terminator) + + # all -A states: not connected + # all -B states: yes connected + # B states serialize as A, so they deserialize as unconnected + + # S0: know nothing + @m.state(initial=True) + def S0A(self): pass # pragma: no cover + @m.state() + def S0B(self): pass # pragma: no cover + + # S1: nameplate known, never claimed + @m.state() + def S1A(self): pass # pragma: no cover + + # S2: nameplate known, maybe claimed + @m.state() + def S2A(self): pass # pragma: no cover + @m.state() + def S2B(self): pass # pragma: no cover + + # S3: nameplate claimed + @m.state() + def S3A(self): pass # pragma: no cover + @m.state() + def S3B(self): pass # pragma: no cover + + # S4: maybe released + @m.state() + def S4A(self): pass # 
pragma: no cover + @m.state() + def S4B(self): pass # pragma: no cover + + # S5: released + # we no longer care whether we're connected or not + #@m.state() + #def S5A(self): pass + #@m.state() + #def S5B(self): pass + @m.state() + def S5(self): pass # pragma: no cover + S5A = S5 + S5B = S5 + + # from Boss + @m.input() + def set_nameplate(self, nameplate): pass + + # from Mailbox + @m.input() + def release(self): pass + + # from Terminator + @m.input() + def close(self): pass + + # from RendezvousConnector + @m.input() + def connected(self): pass + @m.input() + def lost(self): pass + + @m.input() + def rx_claimed(self, mailbox): pass + @m.input() + def rx_released(self): pass + + + @m.output() + def record_nameplate(self, nameplate): + self._nameplate = nameplate + @m.output() + def record_nameplate_and_RC_tx_claim(self, nameplate): + self._nameplate = nameplate + self._RC.tx_claim(self._nameplate) + @m.output() + def RC_tx_claim(self): + # when invoked via M.connected(), we must use the stored nameplate + self._RC.tx_claim(self._nameplate) + @m.output() + def I_got_wordlist(self, mailbox): + # TODO select wordlist based on nameplate properties, in rx_claimed + wordlist = PGPWordList() + self._I.got_wordlist(wordlist) + @m.output() + def M_got_mailbox(self, mailbox): + self._M.got_mailbox(mailbox) + @m.output() + def RC_tx_release(self): + assert self._nameplate + self._RC.tx_release(self._nameplate) + @m.output() + def T_nameplate_done(self): + self._T.nameplate_done() + + S0A.upon(set_nameplate, enter=S1A, outputs=[record_nameplate]) + S0A.upon(connected, enter=S0B, outputs=[]) + S0A.upon(close, enter=S5A, outputs=[T_nameplate_done]) + S0B.upon(set_nameplate, enter=S2B, + outputs=[record_nameplate_and_RC_tx_claim]) + S0B.upon(lost, enter=S0A, outputs=[]) + S0B.upon(close, enter=S5A, outputs=[T_nameplate_done]) + + S1A.upon(connected, enter=S2B, outputs=[RC_tx_claim]) + S1A.upon(close, enter=S5A, outputs=[T_nameplate_done]) + + S2A.upon(connected, enter=S2B, 
outputs=[RC_tx_claim]) + S2A.upon(close, enter=S4A, outputs=[]) + S2B.upon(lost, enter=S2A, outputs=[]) + S2B.upon(rx_claimed, enter=S3B, outputs=[I_got_wordlist, M_got_mailbox]) + S2B.upon(close, enter=S4B, outputs=[RC_tx_release]) + + S3A.upon(connected, enter=S3B, outputs=[]) + S3A.upon(close, enter=S4A, outputs=[]) + S3B.upon(lost, enter=S3A, outputs=[]) + #S3B.upon(rx_claimed, enter=S3B, outputs=[]) # shouldn't happen + S3B.upon(release, enter=S4B, outputs=[RC_tx_release]) + S3B.upon(close, enter=S4B, outputs=[RC_tx_release]) + + S4A.upon(connected, enter=S4B, outputs=[RC_tx_release]) + S4A.upon(close, enter=S4A, outputs=[]) + S4B.upon(lost, enter=S4A, outputs=[]) + S4B.upon(rx_claimed, enter=S4B, outputs=[]) + S4B.upon(rx_released, enter=S5B, outputs=[T_nameplate_done]) + S4B.upon(release, enter=S4B, outputs=[]) # mailbox is lazy + # Mailbox doesn't remember how many times it's sent a release, and will + # re-send a new one for each peer message it receives. Ignoring it here + # is easier than adding a new pair of states to Mailbox. + S4B.upon(close, enter=S4B, outputs=[]) + + S5A.upon(connected, enter=S5B, outputs=[]) + S5B.upon(lost, enter=S5A, outputs=[]) + S5.upon(release, enter=S5, outputs=[]) # mailbox is lazy + S5.upon(close, enter=S5, outputs=[]) diff --git a/src/wormhole/_order.py b/src/wormhole/_order.py new file mode 100644 index 0000000..5383a14 --- /dev/null +++ b/src/wormhole/_order.py @@ -0,0 +1,68 @@ +from __future__ import print_function, absolute_import, unicode_literals +from zope.interface import implementer +from attr import attrs, attrib +from attr.validators import provides, instance_of +from automat import MethodicalMachine +from . 
import _interfaces + +@attrs +@implementer(_interfaces.IOrder) +class Order(object): + _side = attrib(validator=instance_of(type(u""))) + _timing = attrib(validator=provides(_interfaces.ITiming)) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def __attrs_post_init__(self): + self._key = None + self._queue = [] + def wire(self, key, receive): + self._K = _interfaces.IKey(key) + self._R = _interfaces.IReceive(receive) + + @m.state(initial=True) + def S0_no_pake(self): pass # pragma: no cover + @m.state(terminal=True) + def S1_yes_pake(self): pass # pragma: no cover + + def got_message(self, side, phase, body): + #print("ORDER[%s].got_message(%s)" % (self._side, phase)) + assert isinstance(side, type("")), type(side) + assert isinstance(phase, type("")), type(phase) + assert isinstance(body, type(b"")), type(body) + if phase == "pake": + self.got_pake(side, phase, body) + else: + self.got_non_pake(side, phase, body) + + @m.input() + def got_pake(self, side, phase, body): pass + @m.input() + def got_non_pake(self, side, phase, body): pass + + @m.output() + def queue(self, side, phase, body): + assert isinstance(side, type("")), type(side) + assert isinstance(phase, type("")), type(phase) + assert isinstance(body, type(b"")), type(body) + self._queue.append((side, phase, body)) + @m.output() + def notify_key(self, side, phase, body): + self._K.got_pake(body) + @m.output() + def drain(self, side, phase, body): + del phase + del body + for (side, phase, body) in self._queue: + self._deliver(side, phase, body) + self._queue[:] = [] + @m.output() + def deliver(self, side, phase, body): + self._deliver(side, phase, body) + + def _deliver(self, side, phase, body): + self._R.got_message(side, phase, body) + + S0_no_pake.upon(got_non_pake, enter=S0_no_pake, outputs=[queue]) + S0_no_pake.upon(got_pake, enter=S1_yes_pake, outputs=[notify_key, drain]) + S1_yes_pake.upon(got_non_pake, enter=S1_yes_pake, outputs=[deliver]) diff --git 
a/src/wormhole/_receive.py b/src/wormhole/_receive.py new file mode 100644 index 0000000..7003110 --- /dev/null +++ b/src/wormhole/_receive.py @@ -0,0 +1,89 @@ +from __future__ import print_function, absolute_import, unicode_literals +from zope.interface import implementer +from attr import attrs, attrib +from attr.validators import provides, instance_of +from automat import MethodicalMachine +from . import _interfaces +from ._key import derive_key, derive_phase_key, decrypt_data, CryptoError + +@attrs +@implementer(_interfaces.IReceive) +class Receive(object): + _side = attrib(validator=instance_of(type(u""))) + _timing = attrib(validator=provides(_interfaces.ITiming)) + m = MethodicalMachine() + set_trace = getattr(m, "setTrace", lambda self, f: None) + + def __attrs_post_init__(self): + self._key = None + + def wire(self, boss, send): + self._B = _interfaces.IBoss(boss) + self._S = _interfaces.ISend(send) + + @m.state(initial=True) + def S0_unknown_key(self): pass # pragma: no cover + @m.state() + def S1_unverified_key(self): pass # pragma: no cover + @m.state() + def S2_verified_key(self): pass # pragma: no cover + @m.state(terminal=True) + def S3_scared(self): pass # pragma: no cover + + # from Ordering + def got_message(self, side, phase, body): + assert isinstance(side, type("")), type(side) + assert isinstance(phase, type("")), type(phase) + assert isinstance(body, type(b"")), type(body) + assert self._key + data_key = derive_phase_key(self._key, side, phase) + try: + plaintext = decrypt_data(data_key, body) + except CryptoError: + self.got_message_bad() + return + self.got_message_good(phase, plaintext) + @m.input() + def got_message_good(self, phase, plaintext): pass + @m.input() + def got_message_bad(self): pass + + # from Key + @m.input() + def got_key(self, key): pass + + @m.output() + def record_key(self, key): + self._key = key + @m.output() + def S_got_verified_key(self, phase, plaintext): + assert self._key + self._S.got_verified_key(self._key) + 
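The Order machine above boils down to a queue-then-drain discipline: messages that arrive before the "pake" phase cannot be decrypted yet, so they are buffered, the pake body is handed to the key machinery, and the buffer is flushed in arrival order. A stdlib-only sketch of that discipline (a hypothetical `MiniOrder`, not part of this patch — the real class routes events through Automat states instead of a boolean flag):

```python
class MiniOrder:
    """Hypothetical sketch of Order's queue-then-drain pattern."""
    def __init__(self, notify_key, deliver):
        self._notify_key = notify_key  # stands in for Key.got_pake
        self._deliver = deliver        # stands in for Receive.got_message
        self._queue = []
        self._got_pake = False

    def got_message(self, side, phase, body):
        if phase == "pake":
            # the PAKE body feeds key agreement; everything buffered so
            # far can now be decrypted, so drain in arrival order
            self._got_pake = True
            self._notify_key(body)
            for queued in self._queue:
                self._deliver(*queued)
            self._queue[:] = []
        elif self._got_pake:
            self._deliver(side, phase, body)
        else:
            self._queue.append((side, phase, body))
```

Send applies the same idea in the outbound direction: `send()` queues plaintext until `got_verified_key()` drains the queue through `_encrypt_and_send()`.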
@m.output() + def W_happy(self, phase, plaintext): + self._B.happy() + @m.output() + def W_got_verifier(self, phase, plaintext): + self._B.got_verifier(derive_key(self._key, b"wormhole:verifier")) + @m.output() + def W_got_message(self, phase, plaintext): + assert isinstance(phase, type("")), type(phase) + assert isinstance(plaintext, type(b"")), type(plaintext) + self._B.got_message(phase, plaintext) + @m.output() + def W_scared(self): + self._B.scared() + + S0_unknown_key.upon(got_key, enter=S1_unverified_key, outputs=[record_key]) + S1_unverified_key.upon(got_message_good, enter=S2_verified_key, + outputs=[S_got_verified_key, + W_happy, W_got_verifier, W_got_message]) + S1_unverified_key.upon(got_message_bad, enter=S3_scared, + outputs=[W_scared]) + S2_verified_key.upon(got_message_bad, enter=S3_scared, + outputs=[W_scared]) + S2_verified_key.upon(got_message_good, enter=S2_verified_key, + outputs=[W_got_message]) + S3_scared.upon(got_message_good, enter=S3_scared, outputs=[]) + S3_scared.upon(got_message_bad, enter=S3_scared, outputs=[]) + diff --git a/src/wormhole/_rendezvous.py b/src/wormhole/_rendezvous.py new file mode 100644 index 0000000..4029e3f --- /dev/null +++ b/src/wormhole/_rendezvous.py @@ -0,0 +1,250 @@ +from __future__ import print_function, absolute_import, unicode_literals +import os +from six.moves.urllib_parse import urlparse +from attr import attrs, attrib +from attr.validators import provides, instance_of +from zope.interface import implementer +from twisted.python import log +from twisted.internet import defer, endpoints +from twisted.application import internet +from autobahn.twisted import websocket +from . 
import _interfaces, errors +from .util import (bytes_to_hexstr, hexstr_to_bytes, + bytes_to_dict, dict_to_bytes) + +class WSClient(websocket.WebSocketClientProtocol): + def onConnect(self, response): + # this fires during WebSocket negotiation, and isn't very useful + # unless you want to modify the protocol settings + #print("onConnect", response) + pass + + def onOpen(self, *args): + # this fires when the WebSocket is ready to go. No arguments + #print("onOpen", args) + #self.wormhole_open = True + self._RC.ws_open(self) + + def onMessage(self, payload, isBinary): + assert not isBinary + try: + self._RC.ws_message(payload) + except: + from twisted.python.failure import Failure + print("LOGGING", Failure()) + log.err() + raise + + def onClose(self, wasClean, code, reason): + #print("onClose") + self._RC.ws_close(wasClean, code, reason) + #if self.wormhole_open: + # self.wormhole._ws_closed(wasClean, code, reason) + #else: + # # we closed before establishing a connection (onConnect) or + # # finishing WebSocket negotiation (onOpen): errback + # self.factory.d.errback(error.ConnectError(reason)) + +class WSFactory(websocket.WebSocketClientFactory): + protocol = WSClient + def __init__(self, RC, *args, **kwargs): + websocket.WebSocketClientFactory.__init__(self, *args, **kwargs) + self._RC = RC + + def buildProtocol(self, addr): + proto = websocket.WebSocketClientFactory.buildProtocol(self, addr) + proto._RC = self._RC + #proto.wormhole_open = False + return proto + +@attrs +@implementer(_interfaces.IRendezvousConnector) +class RendezvousConnector(object): + _url = attrib(validator=instance_of(type(u""))) + _appid = attrib(validator=instance_of(type(u""))) + _side = attrib(validator=instance_of(type(u""))) + _reactor = attrib() + _journal = attrib(validator=provides(_interfaces.IJournal)) + _tor_manager = attrib() # TODO: ITorManager or None + _timing = attrib(validator=provides(_interfaces.ITiming)) + + def __attrs_post_init__(self): + self._trace = None + self._ws 
= None + f = WSFactory(self, self._url) + f.setProtocolOptions(autoPingInterval=60, autoPingTimeout=600) + p = urlparse(self._url) + ep = self._make_endpoint(p.hostname, p.port or 80) + # TODO: change/wrap ClientService to fail if the first attempt fails + self._connector = internet.ClientService(ep, f) + + def set_trace(self, f): + self._trace = f + def _debug(self, what): + if self._trace: + self._trace(old_state="", input=what, new_state="", output=None) + + def _make_endpoint(self, hostname, port): + if self._tor_manager: + return self._tor_manager.get_endpoint_for(hostname, port) + return endpoints.HostnameEndpoint(self._reactor, hostname, port) + + def wire(self, boss, nameplate, mailbox, allocator, lister, terminator): + self._B = _interfaces.IBoss(boss) + self._N = _interfaces.INameplate(nameplate) + self._M = _interfaces.IMailbox(mailbox) + self._A = _interfaces.IAllocator(allocator) + self._L = _interfaces.ILister(lister) + self._T = _interfaces.ITerminator(terminator) + + # from Boss + def start(self): + self._connector.startService() + + # from Mailbox + def tx_claim(self, nameplate): + self._tx("claim", nameplate=nameplate) + + def tx_open(self, mailbox): + self._tx("open", mailbox=mailbox) + + def tx_add(self, phase, body): + assert isinstance(phase, type("")), type(phase) + assert isinstance(body, type(b"")), type(body) + self._tx("add", phase=phase, body=bytes_to_hexstr(body)) + + def tx_release(self, nameplate): + self._tx("release", nameplate=nameplate) + + def tx_close(self, mailbox, mood): + self._tx("close", mailbox=mailbox, mood=mood) + + def stop(self): + d = defer.maybeDeferred(self._connector.stopService) + d.addErrback(log.err) # TODO: deliver error upstairs? 
+ d.addBoth(self._stopped) + + + # from Lister + def tx_list(self): + self._tx("list") + + # from Code + def tx_allocate(self): + self._tx("allocate") + + # from our WSClient (the WebSocket protocol) + def ws_open(self, proto): + self._debug("R.connected") + self._ws = proto + try: + self._tx("bind", appid=self._appid, side=self._side) + self._N.connected() + self._M.connected() + self._L.connected() + self._A.connected() + except Exception as e: + self._B.error(e) + raise + self._debug("R.connected finished notifications") + + def ws_message(self, payload): + msg = bytes_to_dict(payload) + if msg["type"] != "ack": + self._debug("R.rx(%s %s%s)" % + (msg["type"], msg.get("phase",""), + "[mine]" if msg.get("side","") == self._side else "", + )) + + self._timing.add("ws_receive", _side=self._side, message=msg) + mtype = msg["type"] + meth = getattr(self, "_response_handle_"+mtype, None) + if not meth: + # make tests fail, but real application will ignore it + log.err(errors._UnknownMessageTypeError("Unknown inbound message type %r" % (msg,))) + return + try: + return meth(msg) + except Exception as e: + log.err(e) + self._B.error(e) + raise + + def ws_close(self, wasClean, code, reason): + self._debug("R.lost") + self._ws = None + self._N.lost() + self._M.lost() + self._L.lost() + self._A.lost() + + # internal + def _stopped(self, res): + self._T.stopped() + + def _tx(self, mtype, **kwargs): + assert self._ws + # msgid is used by misc/dump-timing.py to correlate our sends with + # their receives, and vice versa. They are also correlated with the + # ACKs we get back from the server (which we otherwise ignore). There + # are so few messages, 16 bits is enough to be mostly-unique. 
+ kwargs["id"] = bytes_to_hexstr(os.urandom(2)) + kwargs["type"] = mtype + self._debug("R.tx(%s %s)" % (mtype.upper(), kwargs.get("phase", ""))) + payload = dict_to_bytes(kwargs) + self._timing.add("ws_send", _side=self._side, **kwargs) + self._ws.sendMessage(payload, False) + + def _response_handle_allocated(self, msg): + nameplate = msg["nameplate"] + assert isinstance(nameplate, type("")), type(nameplate) + self._A.rx_allocated(nameplate) + + def _response_handle_nameplates(self, msg): + # we get list of {id: ID}, with maybe more attributes in the future + nameplates = msg["nameplates"] + assert isinstance(nameplates, list), type(nameplates) + nids = set() + for n in nameplates: + assert isinstance(n, dict), type(n) + nameplate_id = n["id"] + assert isinstance(nameplate_id, type("")), type(nameplate_id) + nids.add(nameplate_id) + # deliver a set of nameplate ids + self._L.rx_nameplates(nids) + + def _response_handle_ack(self, msg): + pass + + def _response_handle_error(self, msg): + # the server sent us a type=error. Most cases are due to our mistakes + # (malformed protocol messages, sending things in the wrong order), + # but it can also result from CrowdedError (more than two clients + # using the same channel). 
+ err = msg["error"] + orig = msg["orig"] + self._B.rx_error(err, orig) + + def _response_handle_welcome(self, msg): + self._B.rx_welcome(msg["welcome"]) + + def _response_handle_claimed(self, msg): + mailbox = msg["mailbox"] + assert isinstance(mailbox, type("")), type(mailbox) + self._N.rx_claimed(mailbox) + + def _response_handle_message(self, msg): + side = msg["side"] + phase = msg["phase"] + assert isinstance(phase, type("")), type(phase) + body = hexstr_to_bytes(msg["body"]) # bytes + self._M.rx_message(side, phase, body) + + def _response_handle_released(self, msg): + self._N.rx_released() + + def _response_handle_closed(self, msg): + self._M.rx_closed() + + + # record, message, payload, packet, bundle, ciphertext, plaintext diff --git a/src/wormhole/_rlcompleter.py b/src/wormhole/_rlcompleter.py new file mode 100644 index 0000000..a525a06 --- /dev/null +++ b/src/wormhole/_rlcompleter.py @@ -0,0 +1,201 @@ +from __future__ import print_function, unicode_literals +import os, traceback +from sys import stderr +try: + import readline +except ImportError: + readline = None +from six.moves import input +from attr import attrs, attrib +from twisted.internet.defer import inlineCallbacks, returnValue +from twisted.internet.threads import deferToThread, blockingCallFromThread +from .errors import KeyFormatError, AlreadyInputNameplateError + +errf = None +if 0: + errf = open("err", "w") if os.path.exists("err") else None +def debug(*args, **kwargs): + if errf: + print(*args, file=errf, **kwargs) + errf.flush() + +@attrs +class CodeInputter(object): + _input_helper = attrib() + _reactor = attrib() + def __attrs_post_init__(self): + self.used_completion = False + self._matches = None + # once we've claimed the nameplate, we can't go back + self._committed_nameplate = None # or string + + def bcft(self, f, *a, **kw): + return blockingCallFromThread(self._reactor, f, *a, **kw) + + def completer(self, text, state): + try: + return self._wrapped_completer(text, state) + 
except Exception as e: + # completer exceptions are normally silently discarded, which + # makes debugging challenging + print("completer exception: %s" % e) + traceback.print_exc() + raise e + + def _wrapped_completer(self, text, state): + self.used_completion = True + # if we get here, then readline must be active + ct = readline.get_completion_type() + if state == 0: + debug("completer starting (%s) (state=0) (ct=%d)" % (text, ct)) + self._matches = self._commit_and_build_completions(text) + debug(" matches:", " ".join(["'%s'" % m for m in self._matches])) + else: + debug(" s%d t'%s' ct=%d" % (state, text, ct)) + + if state >= len(self._matches): + debug(" returning None") + return None + debug(" returning '%s'" % self._matches[state]) + return self._matches[state] + + def _commit_and_build_completions(self, text): + ih = self._input_helper + if "-" in text: + got_nameplate = True + nameplate, words = text.split("-", 1) + else: + got_nameplate = False + nameplate = text # partial + + # 'text' is one of these categories: + # "" or "12": complete on nameplates (all that match, maybe just one) + + # "123-": if we haven't already committed to a nameplate, commit and + # wait for the wordlist. Then (either way) return the whole wordlist. + + # "123-supp": if we haven't already committed to a nameplate, commit + # and wait for the wordlist. Then (either way) return all current + # matches. + + if self._committed_nameplate: + if not got_nameplate or nameplate != self._committed_nameplate: + # they deleted past the commitment point: we can't use + # this. For now, bail, but in the future let's find a + # gentler way to encourage them to not do that. 
+ raise AlreadyInputNameplateError("nameplate (%s-) already entered, cannot go back" % self._committed_nameplate) + if not got_nameplate: + # we're completing on nameplates: "" or "12" or "123" + self.bcft(ih.refresh_nameplates) # results arrive later + debug(" getting nameplates") + completions = self.bcft(ih.get_nameplate_completions, nameplate) + else: # "123-" or "123-supp" + # time to commit to this nameplate, if they haven't already + if not self._committed_nameplate: + debug(" choose_nameplate(%s)" % nameplate) + self.bcft(ih.choose_nameplate, nameplate) + self._committed_nameplate = nameplate + + # Now we want to wait for the wordlist to be available. If + # the user just typed "12-supp TAB", we'll claim "12" but + # will need a server roundtrip to discover that "supportive" + # is the only match. If we don't block, we'd return an empty + # wordlist to readline (which will beep and show no + # completions). *Then* when the user hits TAB again a moment + # later (after the wordlist has arrived, but the user hasn't + # modified the input line since the previous empty response), + # readline would show one match but not complete anything. + + # In general we want to avoid returning empty lists to + # readline. If the user hits TAB when typing in the nameplate + # (before the sender has established one, or before we're + # heard about it from the server), it can't be helped. But + # for the rest of the code, a simple wait-for-wordlist will + # improve the user experience. 
+ self.bcft(ih.when_wordlist_is_available) # blocks on CLAIM + # and we're completing on words now + debug(" getting words (%s)" % (words,)) + completions = [nameplate+"-"+c + for c in self.bcft(ih.get_word_completions, words)] + + # rlcompleter wants full strings + return sorted(completions) + + def finish(self, text): + if "-" not in text: + raise KeyFormatError("incomplete wormhole code") + nameplate, words = text.split("-", 1) + + if self._committed_nameplate: + if nameplate != self._committed_nameplate: + # they deleted past the commitment point: we can't use + # this. For now, bail, but in the future let's find a + # gentler way to encourage them to not do that. + raise AlreadyInputNameplateError("nameplate (%s-) already entered, cannot go back" % self._committed_nameplate) + else: + debug(" choose_nameplate(%s)" % nameplate) + self._input_helper.choose_nameplate(nameplate) + debug(" choose_words(%s)" % words) + self._input_helper.choose_words(words) + +def _input_code_with_completion(prompt, input_helper, reactor): + c = CodeInputter(input_helper, reactor) + if readline is not None: + if readline.__doc__ and "libedit" in readline.__doc__: + readline.parse_and_bind("bind ^I rl_complete") + else: + readline.parse_and_bind("tab: complete") + readline.set_completer(c.completer) + readline.set_completer_delims("") + debug("==== readline-based completion is prepared") + else: + debug("==== unable to import readline, disabling completion") + pass + code = input(prompt) + # Code is str(bytes) on py2, and str(unicode) on py3. We want unicode. + if isinstance(code, bytes): + code = code.decode("utf-8") + c.finish(code) + return c.used_completion + +def warn_readline(): + # When our process receives a SIGINT, Twisted's SIGINT handler will + # stop the reactor and wait for all threads to terminate before the + # process exits. 
However, if we were waiting for + # input_code_with_completion() when SIGINT happened, the readline + # thread will be blocked waiting for something on stdin. Trick the + # user into satisfying the blocking read so we can exit. + print("\nCommand interrupted: please press Return to quit", file=stderr) + + # Other potential approaches to this problem: + # * hard-terminate our process with os._exit(1), but make sure the + # tty gets reset to a normal mode ("cooked"?) first, so that the + # next shell command the user types is echoed correctly + # * track down the thread (t.p.threadable.getThreadID from inside the + # thread), get a cffi binding to pthread_kill, deliver SIGINT to it + # * allocate a pty pair (pty.openpty), replace sys.stdin with the + # slave, build a pty bridge that copies bytes (and other PTY + # things) from the real stdin to the master, then close the slave + # at shutdown, so readline sees EOF + # * write tab-completion and basic editing (TTY raw mode, + # backspace-is-erase) without readline, probably with curses or + # twisted.conch.insults + # * write a separate program to get codes (maybe just "wormhole + # --internal-get-code"), run it as a subprocess, let it inherit + # stdin/stdout, send it SIGINT when we receive SIGINT ourselves. It + # needs an RPC mechanism (over some extra file descriptors) to ask + # us to fetch the current nameplate_id list. + # + # Note that hard-terminating our process with os.kill(os.getpid(), + # signal.SIGKILL), or SIGTERM, doesn't seem to work: the thread + # doesn't see the signal, and we must still wait for stdin to make + # readline finish. 
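Because `CodeInputter` runs inside `deferToThread`, every interaction with the wormhole (`refresh_nameplates`, `choose_nameplate`, ...) has to be bounced back to the reactor thread, which is what `bcft`/`blockingCallFromThread` does. The shape of that primitive, sketched with stdlib threading only (hypothetical names; `schedule` stands in for `reactor.callFromThread`):

```python
import queue
import threading

def blocking_call_from_thread(schedule, f, *args):
    """Sketch of the bcft/blockingCallFromThread pattern: ask the loop
    thread to run f(*args) via schedule(), block here for the result."""
    box = queue.Queue()
    def job():
        try:
            box.put((True, f(*args)))
        except Exception as e:
            box.put((False, e))
    schedule(job)           # job executes on the loop thread, not here
    ok, value = box.get()   # block this worker until the loop has answered
    if ok:
        return value
    raise value
```

Note that calling this *from* the loop thread itself would deadlock — `box.get()` would block the very thread expected to run the job — which is one reason the completer must live on its own thread in the first place.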
+
+@inlineCallbacks
+def input_with_completion(prompt, input_helper, reactor):
+    t = reactor.addSystemEventTrigger("before", "shutdown", warn_readline)
+    #input_helper.refresh_nameplates()
+    used_completion = yield deferToThread(_input_code_with_completion,
+                                          prompt, input_helper, reactor)
+    reactor.removeSystemEventTrigger(t)
+    returnValue(used_completion)
diff --git a/src/wormhole/_send.py b/src/wormhole/_send.py
new file mode 100644
index 0000000..762b2fa
--- /dev/null
+++ b/src/wormhole/_send.py
@@ -0,0 +1,64 @@
+from __future__ import print_function, absolute_import, unicode_literals
+from attr import attrs, attrib
+from attr.validators import provides, instance_of
+from zope.interface import implementer
+from automat import MethodicalMachine
+from . import _interfaces
+from ._key import derive_phase_key, encrypt_data
+
+@attrs
+@implementer(_interfaces.ISend)
+class Send(object):
+    _side = attrib(validator=instance_of(type(u"")))
+    _timing = attrib(validator=provides(_interfaces.ITiming))
+    m = MethodicalMachine()
+    set_trace = getattr(m, "setTrace", lambda self, f: None)
+
+    def __attrs_post_init__(self):
+        self._queue = []
+
+    def wire(self, mailbox):
+        self._M = _interfaces.IMailbox(mailbox)
+
+    @m.state(initial=True)
+    def S0_no_key(self): pass # pragma: no cover
+    @m.state(terminal=True)
+    def S1_verified_key(self): pass # pragma: no cover
+
+    # from Receive
+    @m.input()
+    def got_verified_key(self, key): pass
+    # from Boss
+    @m.input()
+    def send(self, phase, plaintext): pass
+
+    @m.output()
+    def queue(self, phase, plaintext):
+        assert isinstance(phase, type("")), type(phase)
+        assert isinstance(plaintext, type(b"")), type(plaintext)
+        self._queue.append((phase, plaintext))
+    @m.output()
+    def record_key(self, key):
+        self._key = key
+    @m.output()
+    def drain(self, key):
+        del key
+        for (phase, plaintext) in self._queue:
+            self._encrypt_and_send(phase, plaintext)
+        self._queue[:] = []
+    @m.output()
+    def deliver(self, phase, plaintext):
+        assert isinstance(phase, type("")), type(phase)
+        assert isinstance(plaintext, type(b"")), type(plaintext)
+        self._encrypt_and_send(phase, plaintext)
+
+    def _encrypt_and_send(self, phase, plaintext):
+        assert self._key
+        data_key = derive_phase_key(self._key, self._side, phase)
+        encrypted = encrypt_data(data_key, plaintext)
+        self._M.add_message(phase, encrypted)
+
+    S0_no_key.upon(send, enter=S0_no_key, outputs=[queue])
+    S0_no_key.upon(got_verified_key, enter=S1_verified_key,
+                   outputs=[record_key, drain])
+    S1_verified_key.upon(send, enter=S1_verified_key, outputs=[deliver])
diff --git a/src/wormhole/_terminator.py b/src/wormhole/_terminator.py
new file mode 100644
index 0000000..f90f7e5
--- /dev/null
+++ b/src/wormhole/_terminator.py
@@ -0,0 +1,106 @@
+from __future__ import print_function, absolute_import, unicode_literals
+from zope.interface import implementer
+from automat import MethodicalMachine
+from . import _interfaces
+
+@implementer(_interfaces.ITerminator)
+class Terminator(object):
+    m = MethodicalMachine()
+    set_trace = getattr(m, "setTrace", lambda self, f: None)
+
+    def __init__(self):
+        self._mood = None
+
+    def wire(self, boss, rendezvous_connector, nameplate, mailbox):
+        self._B = _interfaces.IBoss(boss)
+        self._RC = _interfaces.IRendezvousConnector(rendezvous_connector)
+        self._N = _interfaces.INameplate(nameplate)
+        self._M = _interfaces.IMailbox(mailbox)
+
+    # 4*2-1 main states:
+    # (nm, m, n, 0): nameplate and/or mailbox is active
+    # (o, ""): open (not-yet-closing), or trying to close
+    # S0 is special: we don't hang out in it
+
+    # TODO: rename o to 0, "" to 1. "S1" is special/terminal
+    # so S0nm/S0n/S0m/S0, S1nm/S1n/S1m/(S1)
+
+    # We start in Snmo (non-closing). When both nameplate and mailbox are
+    # done, and we're closing, then we stop the RendezvousConnector
+
+    @m.state(initial=True)
+    def Snmo(self): pass # pragma: no cover
+    @m.state()
+    def Smo(self): pass # pragma: no cover
+    @m.state()
+    def Sno(self): pass # pragma: no cover
+    @m.state()
+    def S0o(self): pass # pragma: no cover
+
+    @m.state()
+    def Snm(self): pass # pragma: no cover
+    @m.state()
+    def Sm(self): pass # pragma: no cover
+    @m.state()
+    def Sn(self): pass # pragma: no cover
+    #@m.state()
+    #def S0(self): pass # unused
+
+    @m.state()
+    def S_stopping(self): pass # pragma: no cover
+    @m.state()
+    def S_stopped(self, terminal=True): pass # pragma: no cover
+
+    # from Boss
+    @m.input()
+    def close(self, mood): pass
+
+    # from Nameplate
+    @m.input()
+    def nameplate_done(self): pass
+
+    # from Mailbox
+    @m.input()
+    def mailbox_done(self): pass
+
+    # from RendezvousConnector
+    @m.input()
+    def stopped(self): pass
+
+
+    @m.output()
+    def close_nameplate(self, mood):
+        self._N.close() # ignores mood
+    @m.output()
+    def close_mailbox(self, mood):
+        self._M.close(mood)
+
+    @m.output()
+    def ignore_mood_and_RC_stop(self, mood):
+        self._RC.stop()
+    @m.output()
+    def RC_stop(self):
+        self._RC.stop()
+    @m.output()
+    def B_closed(self):
+        self._B.closed()
+
+    Snmo.upon(mailbox_done, enter=Sno, outputs=[])
+    Snmo.upon(close, enter=Snm, outputs=[close_nameplate, close_mailbox])
+    Snmo.upon(nameplate_done, enter=Smo, outputs=[])
+
+    Sno.upon(close, enter=Sn, outputs=[close_nameplate, close_mailbox])
+    Sno.upon(nameplate_done, enter=S0o, outputs=[])
+
+    Smo.upon(close, enter=Sm, outputs=[close_nameplate, close_mailbox])
+    Smo.upon(mailbox_done, enter=S0o, outputs=[])
+
+    Snm.upon(mailbox_done, enter=Sn, outputs=[])
+    Snm.upon(nameplate_done, enter=Sm, outputs=[])
+
+    Sn.upon(nameplate_done, enter=S_stopping, outputs=[RC_stop])
+    S0o.upon(close, enter=S_stopping,
+             outputs=[close_nameplate, close_mailbox, ignore_mood_and_RC_stop])
+    Sm.upon(mailbox_done, enter=S_stopping, outputs=[RC_stop])
+
+    S_stopping.upon(stopped, enter=S_stopped, outputs=[B_closed])
diff --git a/src/wormhole/wordlist.py b/src/wormhole/_wordlist.py
similarity index 87%
rename from src/wormhole/wordlist.py
rename to src/wormhole/_wordlist.py
index fe6c50c..e14972b 100644
--- a/src/wormhole/wordlist.py
+++ b/src/wormhole/_wordlist.py
@@ -1,4 +1,8 @@
-from __future__ import unicode_literals
+from __future__ import unicode_literals, print_function
+import os
+from zope.interface import implementer
+from ._interfaces import IWordlist
+
 # The PGP Word List, which maps bytes to phonetically-distinct words. There
 # are two lists, even and odd, and encodings should alternate between then to
 # detect dropped words. https://en.wikipedia.org/wiki/PGP_Words
@@ -146,13 +150,44 @@ byte_to_even_word = dict([(unhexlify(k.encode("ascii")), both_words[0])
 byte_to_odd_word = dict([(unhexlify(k.encode("ascii")), both_words[1])
                          for k,both_words in raw_words.items()])
+
 even_words_lowercase, odd_words_lowercase = set(), set()
-even_words_lowercase_to_byte, odd_words_lowercase_to_byte = dict(), dict()
+
 for k,both_words in raw_words.items():
     even_word, odd_word = both_words
-    even_words_lowercase.add(even_word.lower())
-    even_words_lowercase_to_byte[even_word.lower()] = unhexlify(k.encode("ascii"))
-    odd_words_lowercase.add(odd_word.lower())
-    odd_words_lowercase_to_byte[odd_word.lower()] = unhexlify(k.encode("ascii"))
+
+@implementer(IWordlist)
+class PGPWordList(object):
+    def get_completions(self, prefix, num_words=2):
+        # start with the odd words
+        count = prefix.count("-")
+        if count % 2 == 0:
+            words = odd_words_lowercase
+        else:
+            words = even_words_lowercase
+        last_partial_word = prefix.split("-")[-1]
+        lp = len(last_partial_word)
+        completions = set()
+        for word in words:
+            if word.startswith(last_partial_word):
+                if lp == 0:
+                    suffix = prefix + word
+                else:
+                    suffix = prefix[:-lp] + word
+                # append a hyphen if we expect more words
+                if count+1 < num_words:
+                    suffix += "-"
+                completions.add(suffix)
+        return completions
+
+    def choose_words(self, length):
+        words = []
+        for i in range(length):
+            # we start with an "odd word"
+            if i % 2 == 0:
+                words.append(byte_to_odd_word[os.urandom(1)].lower())
+            else:
+                words.append(byte_to_even_word[os.urandom(1)].lower())
+        return "-".join(words)
diff --git a/src/wormhole/channel_monitor.py b/src/wormhole/channel_monitor.py
deleted file mode 100644
index f5f4c50..0000000
--- a/src/wormhole/channel_monitor.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from __future__ import print_function, unicode_literals
-import sys
-from weakref import ref
-
-class ChannelMonitor:
-    def __init__(self):
-        self._open_channels = set()
-    def add(self, w):
-        wr = ref(w, self._lost)
-        self._open_channels.add(wr)
-    def _lost(self, wr):
-        print("Error: a Wormhole instance was not closed", file=sys.stderr)
-    def close(self, w):
-        self._open_channels.discard(ref(w))
-
-monitor = ChannelMonitor() # singleton
diff --git a/src/wormhole/cli/cmd_receive.py b/src/wormhole/cli/cmd_receive.py
index c262b9a..eae17a0 100644
--- a/src/wormhole/cli/cmd_receive.py
+++ b/src/wormhole/cli/cmd_receive.py
@@ -5,14 +5,17 @@ from humanize import naturalsize
 from twisted.internet import reactor
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.python import log
-from ..wormhole import wormhole
+from wormhole import create, input_with_completion, __version__
 from ..transit import TransitReceiver
 from ..errors import TransferError, WormholeClosedError, NoTorError
 from ..util import (dict_to_bytes, bytes_to_dict, bytes_to_hexstr,
                     estimate_free_space)
+from .welcome import CLIWelcomeHandler
 
 APPID = u"lothar.com/wormhole/text-or-file-xfer"
-VERIFY_TIMER = 1
+
+KEY_TIMER = 1.0
+VERIFY_TIMER = 1.0
 
 class RespondError(Exception):
     def __init__(self, response):
@@ -61,8 +64,13 @@ class TwistedReceiver:
             # with the user handing off the wormhole code
             yield self._tor_manager.start()
 
-        w = wormhole(self.args.appid or APPID, self.args.relay_url,
-                     self._reactor, self._tor_manager, timing=self.args.timing)
+        wh = CLIWelcomeHandler(self.args.relay_url, __version__,
+                               self.args.stderr)
+        w = create(self.args.appid or APPID, self.args.relay_url,
+                   self._reactor,
+                   tor_manager=self._tor_manager,
+                   timing=self.args.timing,
+                   welcome_handler=wh.handle_welcome)
         # I wanted to do this instead:
         #
         #  try:
@@ -74,23 +82,71 @@ class TwistedReceiver:
         # as coming from the "yield self._go" line, which wasn't very useful
         # for tracking it down.
         d = self._go(w)
-        d.addBoth(w.close)
+
+        # if we succeed, we should close and return the w.close results
+        # (which might be an error)
+        @inlineCallbacks
+        def _good(res):
+            yield w.close() # wait for ack
+            returnValue(res)
+
+        # if we raise an error, we should close and then return the original
+        # error (the close might give us an error, but it isn't as important
+        # as the original one)
+        @inlineCallbacks
+        def _bad(f):
+            log.err(f)
+            try:
+                yield w.close() # might be an error too
+            except:
+                pass
+            returnValue(f)
+
+        d.addCallbacks(_good, _bad)
         yield d
 
     @inlineCallbacks
     def _go(self, w):
         yield self._handle_code(w)
-        yield w.establish_key()
-        def on_slow_connection():
-            print(u"Key established, waiting for confirmation...",
-                  file=self.args.stderr)
-        notify = self._reactor.callLater(VERIFY_TIMER, on_slow_connection)
+
+        def on_slow_key():
+            print(u"Waiting for sender...", file=self.args.stderr)
+        notify = self._reactor.callLater(KEY_TIMER, on_slow_key)
         try:
-            verifier = yield w.verify()
+            # We wait here until we connect to the server and see the sender's
+            # PAKE message. If we used set_code() in the "human-selected
+            # offline codes" mode, then the sender might not have even
+            # started yet, so we might be sitting here for a while. Because
+            # of that possibility, it's probably not appropriate to give up
+            # automatically after some timeout. The user can express their
+            # impatience by quitting the program with control-C.
+            yield w.when_key()
         finally:
             if not notify.called:
                 notify.cancel()
-        self._show_verifier(verifier)
+
+        def on_slow_verification():
+            print(u"Key established, waiting for confirmation...",
+                  file=self.args.stderr)
+        notify = self._reactor.callLater(VERIFY_TIMER, on_slow_verification)
+        try:
+            # We wait here until we've seen their VERSION message (which they
+            # send after seeing our PAKE message, and has the side-effect of
+            # verifying that we both share the same key). There is a
+            # round-trip between these two events, and we could experience a
+            # significant delay here if:
+            #  * the relay server is being restarted
+            #  * the network is very slow
+            #  * the sender is very slow
+            #  * the sender has quit (in which case we may wait forever)
+
+            # It would be reasonable to give up after waiting here for too
+            # long.
+            verifier_bytes = yield w.when_verified()
+        finally:
+            if not notify.called:
+                notify.cancel()
+        self._show_verifier(verifier_bytes)
 
         want_offer = True
         done = False
@@ -127,7 +183,7 @@
     @inlineCallbacks
     def _get_data(self, w):
         # this may raise WrongPasswordError
-        them_bytes = yield w.get()
+        them_bytes = yield w.when_received()
         them_d = bytes_to_dict(them_bytes)
         if "error" in them_d:
             raise TransferError(them_d["error"])
@@ -142,11 +198,17 @@
         if code:
             w.set_code(code)
         else:
-            yield w.input_code("Enter receive wormhole code: ",
-                               self.args.code_length)
+            prompt = "Enter receive wormhole code: "
+            used_completion = yield input_with_completion(prompt,
+                                                          w.input_code(),
+                                                          self._reactor)
+            if not used_completion:
+                print(" (note: you can use <Tab> to complete words)",
+                      file=self.args.stderr)
+        yield w.when_code()
 
-    def _show_verifier(self, verifier):
-        verifier_hex = bytes_to_hexstr(verifier)
+    def _show_verifier(self, verifier_bytes):
+        verifier_hex = bytes_to_hexstr(verifier_bytes)
         if self.args.verify:
             self._msg(u"Verifier %s." % verifier_hex)
diff --git a/src/wormhole/cli/cmd_send.py b/src/wormhole/cli/cmd_send.py
index 12dd3bb..c51a115 100644
--- a/src/wormhole/cli/cmd_send.py
+++ b/src/wormhole/cli/cmd_send.py
@@ -7,9 +7,10 @@ from twisted.protocols import basic
 from twisted.internet import reactor
 from twisted.internet.defer import inlineCallbacks, returnValue
 from ..errors import TransferError, WormholeClosedError, NoTorError
-from ..wormhole import wormhole
+from wormhole import create, __version__
 from ..transit import TransitSender
 from ..util import dict_to_bytes, bytes_to_dict, bytes_to_hexstr
+from .welcome import CLIWelcomeHandler
 
 APPID = u"lothar.com/wormhole/text-or-file-xfer"
 VERIFY_TIMER = 1
@@ -52,11 +53,35 @@ class Sender:
             # with the user handing off the wormhole code
             yield self._tor_manager.start()
 
-        w = wormhole(self._args.appid or APPID, self._args.relay_url,
-                     self._reactor, self._tor_manager,
-                     timing=self._timing)
+        wh = CLIWelcomeHandler(self._args.relay_url, __version__,
+                               self._args.stderr)
+        w = create(self._args.appid or APPID, self._args.relay_url,
+                   self._reactor,
+                   tor_manager=self._tor_manager,
+                   timing=self._timing,
+                   welcome_handler=wh.handle_welcome)
         d = self._go(w)
-        d.addBoth(w.close) # must wait for ack from close()
+
+        # if we succeed, we should close and return the w.close results
+        # (which might be an error)
+        @inlineCallbacks
+        def _good(res):
+            yield w.close() # wait for ack
+            returnValue(res)
+
+        # if we raise an error, we should close and then return the original
+        # error (the close might give us an error, but it isn't as important
+        # as the original one)
+        @inlineCallbacks
+        def _bad(f):
+            log.err(f)
+            try:
+                yield w.close() # might be an error too
+            except:
+                pass
+            returnValue(f)
+
+        d.addCallbacks(_good, _bad)
         yield d
 
     def _send_data(self, data, w):
@@ -83,40 +108,44 @@ class Sender:
 
         if args.code:
             w.set_code(args.code)
-            code = args.code
         else:
-            code = yield w.get_code(args.code_length)
+            w.allocate_code(args.code_length)
+            code = yield w.when_code()
 
         if not args.zeromode:
             print(u"Wormhole code is: %s" % code, file=args.stderr)
             # flush stderr so the code is displayed immediately
             args.stderr.flush()
         print(u"", file=args.stderr)
 
-        yield w.establish_key()
+        # We don't print a "waiting" message for when_key() here, even though
+        # we do that in cmd_receive.py, because it's not at all surprising to
+        # be waiting here for a long time. We'll sit in when_key() until the
+        # receiver has typed in the code and their PAKE message makes it to
+        # us.
+        yield w.when_key()
+
+        # TODO: don't stall on w.verify() unless they want it
         def on_slow_connection():
             print(u"Key established, waiting for confirmation...",
                   file=args.stderr)
         notify = self._reactor.callLater(VERIFY_TIMER, on_slow_connection)
-
-        # TODO: don't stall on w.verify() unless they want it
         try:
-            verifier_bytes = yield w.verify() # this may raise WrongPasswordError
+            # The usual sender-chooses-code sequence means the receiver's
+            # PAKE should be followed immediately by their VERSION, so
+            # w.when_verified() should fire right away. However if we're
+            # using the offline-codes sequence, and the receiver typed in
+            # their code first, and then they went offline, we might be
+            # sitting here for a while, so printing the "waiting" message
+            # seems like a good idea. It might even be appropriate to give up
+            # after a while.
+            verifier_bytes = yield w.when_verified() # might WrongPasswordError
         finally:
             if not notify.called:
                 notify.cancel()
 
         if args.verify:
-            verifier = bytes_to_hexstr(verifier_bytes)
-            while True:
-                ok = six.moves.input("Verifier %s. ok? (yes/no): " % verifier)
-                if ok.lower() == "yes":
-                    break
-                if ok.lower() == "no":
-                    err = "sender rejected verification check, abandoned transfer"
-                    reject_data = dict_to_bytes({"error": err})
-                    w.send(reject_data)
-                    raise TransferError(err)
+            self._check_verifier(w, verifier_bytes) # blocks, can TransferError
 
         if self._fd_to_send:
             ts = TransitSender(args.transit_helper,
@@ -146,12 +175,13 @@ class Sender:
 
         while True:
             try:
-                them_d_bytes = yield w.get()
+                them_d_bytes = yield w.when_received()
             except WormholeClosedError:
                 if done:
                     returnValue(None)
                 raise TransferError("unexpected close")
-            # TODO: get() fired, so now it's safe to use w.derive_key()
+            # TODO: when_received() fired, so now it's safe to use
+            # w.derive_key()
             them_d = bytes_to_dict(them_d_bytes)
             #print("GOT", them_d)
             recognized = False
@@ -171,6 +201,18 @@ class Sender:
         if not recognized:
             log.msg("unrecognized message %r" % (them_d,))
 
+    def _check_verifier(self, w, verifier_bytes):
+        verifier = bytes_to_hexstr(verifier_bytes)
+        while True:
+            ok = six.moves.input("Verifier %s. ok? (yes/no): " % verifier)
+            if ok.lower() == "yes":
+                break
+            if ok.lower() == "no":
+                err = "sender rejected verification check, abandoned transfer"
+                reject_data = dict_to_bytes({"error": err})
+                w.send(reject_data)
+                raise TransferError(err)
+
     def _handle_transit(self, receiver_transit):
         ts = self._transit_sender
         ts.add_connection_hints(receiver_transit.get("hints-v1", []))
diff --git a/src/wormhole/cli/welcome.py b/src/wormhole/cli/welcome.py
new file mode 100644
index 0000000..50a60a4
--- /dev/null
+++ b/src/wormhole/cli/welcome.py
@@ -0,0 +1,24 @@
+from __future__ import print_function, absolute_import, unicode_literals
+import sys
+from ..wormhole import _WelcomeHandler
+
+class CLIWelcomeHandler(_WelcomeHandler):
+    def __init__(self, url, cli_version, stderr=sys.stderr):
+        _WelcomeHandler.__init__(self, url, stderr)
+        self._current_version = cli_version
+        self._version_warning_displayed = False
+
+    def handle_welcome(self, welcome):
+        # Only warn if we're running a release version (e.g. 0.0.6, not
+        # 0.0.6+DISTANCE.gHASH). Only warn once.
+        if ("current_cli_version" in welcome
+            and "+" not in self._current_version
+            and not self._version_warning_displayed
+            and welcome["current_cli_version"] != self._current_version):
+            print("Warning: errors may occur unless both sides are running the same version", file=self.stderr)
+            print("Server claims %s is current, but ours is %s"
+                  % (welcome["current_cli_version"], self._current_version),
+                  file=self.stderr)
+            self._version_warning_displayed = True
+        _WelcomeHandler.handle_welcome(self, welcome)
+
diff --git a/src/wormhole/errors.py b/src/wormhole/errors.py
index 7eff520..06f74d3 100644
--- a/src/wormhole/errors.py
+++ b/src/wormhole/errors.py
@@ -1,33 +1,25 @@
 from __future__ import unicode_literals
-import functools
 
-class ServerError(Exception):
-    def __init__(self, message, relay):
-        self.message = message
-        self.relay = relay
-    def __str__(self):
-        return self.message
+class WormholeError(Exception):
+    """Parent class for all wormhole-related errors"""
 
-def handle_server_error(func):
-    @functools.wraps(func)
-    def _wrap(*args, **kwargs):
-        try:
-            return func(*args, **kwargs)
-        except ServerError as e:
-            print("Server error (from %s):\n%s" % (e.relay, e.message))
-            return 1
-    return _wrap
+class ServerError(WormholeError):
+    """The relay server complained about something we did."""
 
-class Timeout(Exception):
+class Timeout(WormholeError):
     pass
 
-class WelcomeError(Exception):
+class WelcomeError(WormholeError):
     """
     The relay server told us to signal an error, probably because our
     version is too old to possibly work. The server said:"""
     pass
 
-class WrongPasswordError(Exception):
+class LonelyError(WormholeError):
+    """wormhole.close() was called before the peer connection could be
+    established"""
+
+class WrongPasswordError(WormholeError):
     """
     Key confirmation failed. Either you or your correspondent typed the code
     wrong, or a would-be man-in-the-middle attacker guessed incorrectly. You
@@ -37,24 +29,54 @@ class WrongPasswordError(Exception):
     # or the data blob was corrupted, and that's why decrypt failed
     pass
 
-class KeyFormatError(Exception):
+class KeyFormatError(WormholeError):
     """
-    The key you entered contains spaces. Magic-wormhole expects keys to be
-    separated by dashes. Please reenter the key you were given separating the
-    words with dashes.
+    The key you entered contains spaces or was missing a dash. Magic-wormhole
+    expects the numerical nameplate and the code words to be separated by
+    dashes. Please reenter the key you were given separating the words with
+    dashes.
     """
 
-class ReflectionAttack(Exception):
+class ReflectionAttack(WormholeError):
     """An attacker (or bug) reflected our outgoing message back to us."""
 
-class InternalError(Exception):
+class InternalError(WormholeError):
     """The programmer did something wrong."""
 
 class WormholeClosedError(InternalError):
     """API calls may not be made after close() is called."""
 
-class TransferError(Exception):
+class TransferError(WormholeError):
     """Something bad happened and the transfer failed."""
 
-class NoTorError(Exception):
+class NoTorError(WormholeError):
     """--tor was requested, but 'txtorcon' is not installed."""
+
+class NoKeyError(WormholeError):
+    """w.derive_key() was called before got_verifier() fired"""
+
+class OnlyOneCodeError(WormholeError):
+    """Only one w.generate_code/w.set_code/w.input_code may be called"""
+
+class MustChooseNameplateFirstError(WormholeError):
+    """The InputHelper was asked to do get_word_completions() or
+    choose_words() before the nameplate was chosen."""
+class AlreadyChoseNameplateError(WormholeError):
+    """The InputHelper was asked to do get_nameplate_completions() after
+    choose_nameplate() was called, or choose_nameplate() was called a second
+    time."""
+class AlreadyChoseWordsError(WormholeError):
+    """The InputHelper was asked to do get_word_completions() after
+    choose_words() was called, or choose_words() was called a second time."""
+class AlreadyInputNameplateError(WormholeError):
+    """The CodeInputter was asked to do completion on a nameplate, when we
+    had already committed to a different one."""
+class WormholeClosed(Exception):
+    """Deferred-returning API calls errback with WormholeClosed if the
+    wormhole was already closed, or if it closes before a real result can be
+    obtained."""
+
+class _UnknownPhaseError(Exception):
+    """internal exception type, for tests."""
+class _UnknownMessageTypeError(Exception):
+    """internal exception type, for tests."""
diff --git a/src/wormhole/journal.py b/src/wormhole/journal.py
new file mode 100644
index 0000000..f7bf0f3
--- /dev/null
+++ b/src/wormhole/journal.py
@@ -0,0 +1,38 @@
+from __future__ import print_function, absolute_import, unicode_literals
+from zope.interface import implementer
+import contextlib
+from ._interfaces import IJournal
+
+@implementer(IJournal)
+class Journal(object):
+    def __init__(self, save_checkpoint):
+        self._save_checkpoint = save_checkpoint
+        self._outbound_queue = []
+        self._processing = False
+
+    def queue_outbound(self, fn, *args, **kwargs):
+        assert self._processing
+        self._outbound_queue.append((fn, args, kwargs))
+
+    @contextlib.contextmanager
+    def process(self):
+        assert not self._processing
+        assert not self._outbound_queue
+        self._processing = True
+        yield # process inbound messages, change state, queue outbound
+        self._save_checkpoint()
+        for (fn, args, kwargs) in self._outbound_queue:
+            fn(*args, **kwargs)
+        self._outbound_queue[:] = []
+        self._processing = False
+
+
+@implementer(IJournal)
+class ImmediateJournal(object):
+    def __init__(self):
+        pass
+    def queue_outbound(self, fn, *args, **kwargs):
+        fn(*args, **kwargs)
+    @contextlib.contextmanager
+    def process(self):
+        yield
diff --git a/src/wormhole/test/common.py b/src/wormhole/test/common.py
index f3b57ad..f0db4ac 100644
--- a/src/wormhole/test/common.py
+++ b/src/wormhole/test/common.py
@@ -1,6 +1,6 @@
 # no unicode_literals untill twisted update
 from twisted.application import service
-from twisted.internet import defer, task
+from twisted.internet import defer, task, reactor
 from twisted.python import log
 from click.testing import CliRunner
 import mock
@@ -84,3 +84,17 @@ def config(*argv):
     cfg = go.call_args[0][1]
     return cfg
 
+@defer.inlineCallbacks
+def poll_until(predicate):
+    # return a Deferred that won't fire until the predicate is True
+    while not predicate():
+        d = defer.Deferred()
+        reactor.callLater(0.001, d.callback, None)
+        yield d
+
+@defer.inlineCallbacks
+def pause_one_tick():
+    # return a Deferred that won't fire until at least the next reactor tick
+    d = defer.Deferred()
+    reactor.callLater(0.001, d.callback, None)
+    yield d
diff --git a/src/wormhole/test/test_scripts.py b/src/wormhole/test/test_cli.py
similarity index 84%
rename from src/wormhole/test/test_scripts.py
rename to src/wormhole/test/test_cli.py
index 49562e0..086f18c 100644
--- a/src/wormhole/test/test_scripts.py
+++ b/src/wormhole/test/test_cli.py
@@ -6,10 +6,10 @@ from twisted.trial import unittest
 from twisted.python import procutils, log
 from twisted.internet import defer, endpoints, reactor
 from twisted.internet.utils import getProcessOutputAndValue
-from twisted.internet.defer import gatherResults, inlineCallbacks
+from twisted.internet.defer import gatherResults, inlineCallbacks, returnValue
 from .. import __version__
 from .common import ServerBase, config
-from ..cli import cmd_send, cmd_receive
+from ..cli import cmd_send, cmd_receive, welcome
 from ..errors import TransferError, WrongPasswordError, WelcomeError
@@ -141,6 +141,45 @@ class OfferData(unittest.TestCase):
         self.assertEqual(str(e),
                          "'%s' is neither file nor directory" % filename)
 
+class LocaleFinder:
+    def __init__(self):
+        self._run_once = False
+
+    @inlineCallbacks
+    def find_utf8_locale(self):
+        if self._run_once:
+            returnValue(self._best_locale)
+        self._best_locale = yield self._find_utf8_locale()
+        self._run_once = True
+        returnValue(self._best_locale)
+
+    @inlineCallbacks
+    def _find_utf8_locale(self):
+        # Click really wants to be running under a unicode-capable locale,
+        # especially on python3. macOS has en-US.UTF-8 but not C.UTF-8, and
+        # most linux boxes have C.UTF-8 but not en-US.UTF-8 . For tests,
+        # figure out which one is present and use that. For runtime, it's a
+        # mess, as really the user must take responsibility for setting their
+        # locale properly. I'm thinking of abandoning Click and going back to
+        # twisted.python.usage to avoid this problem in the future.
+        (out, err, rc) = yield getProcessOutputAndValue("locale", ["-a"])
+        if rc != 0:
+            log.msg("error running 'locale -a', rc=%s" % (rc,))
+            log.msg("stderr: %s" % (err,))
+            returnValue(None)
+        out = out.decode("utf-8") # make sure we get a string
+        utf8_locales = {}
+        for locale in out.splitlines():
+            locale = locale.strip()
+            if locale.lower().endswith((".utf-8", ".utf8")):
+                utf8_locales[locale.lower()] = locale
+        for wanted in ["C.utf8", "C.UTF-8", "en_US.utf8", "en_US.UTF-8"]:
+            if wanted.lower() in utf8_locales:
+                returnValue(utf8_locales[wanted.lower()])
+        if utf8_locales:
+            returnValue(list(utf8_locales.values())[0])
+        returnValue(None)
+locale_finder = LocaleFinder()
 
 class ScriptsBase:
     def find_executable(self):
@@ -159,6 +198,7 @@
                 % (wormhole, sys.executable))
         return wormhole
 
+    @inlineCallbacks
     def is_runnable(self):
         # One property of Versioneer is that many changes to the source tree
         # (making a commit, dirtying a previously-clean tree) will change the
@@ -175,21 +215,22 @@
         # Setting LANG/LC_ALL to a unicode-capable locale is necessary to
         # convince Click to not complain about a forced-ascii locale. My
         # apologies to folks who want to run tests on a machine that doesn't
-        # have the en_US.UTF-8 locale installed.
+        # have the C.UTF-8 locale installed.
+ locale = yield locale_finder.find_utf8_locale() + if not locale: + raise unittest.SkipTest("unable to find UTF-8 locale") + locale_env = dict(LC_ALL=locale, LANG=locale) wormhole = self.find_executable() - d = getProcessOutputAndValue(wormhole, ["--version"], - env=dict(LC_ALL="en_US.UTF-8", - LANG="en_US.UTF-8")) - def _check(res): - out, err, rc = res - if rc != 0: - log.msg("wormhole not runnable in this tree:") - log.msg("out", out) - log.msg("err", err) - log.msg("rc", rc) - raise unittest.SkipTest("wormhole is not runnable in this tree") - d.addCallback(_check) - return d + res = yield getProcessOutputAndValue(wormhole, ["--version"], + env=locale_env) + out, err, rc = res + if rc != 0: + log.msg("wormhole not runnable in this tree:") + log.msg("out", out) + log.msg("err", err) + log.msg("rc", rc) + raise unittest.SkipTest("wormhole is not runnable in this tree") + returnValue(locale_env) class ScriptVersion(ServerBase, ScriptsBase, unittest.TestCase): # we need Twisted to run the server, but we run the sender and receiver @@ -204,7 +245,8 @@ class ScriptVersion(ServerBase, ScriptsBase, unittest.TestCase): wormhole = self.find_executable() # we must pass on the environment so that "something" doesn't # get sad about UTF8 vs. 
ascii encodings - out, err, rc = yield getProcessOutputAndValue(wormhole, ["--version"], env=os.environ) + out, err, rc = yield getProcessOutputAndValue(wormhole, ["--version"], + env=os.environ) err = err.decode("utf-8") if "DistributionNotFound" in err: log.msg("stderr was %s" % err) @@ -230,16 +272,17 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase): # we need Twisted to run the server, but we run the sender and receiver # with deferToThread() + @inlineCallbacks def setUp(self): - d = self.is_runnable() - d.addCallback(lambda _: ServerBase.setUp(self)) - return d + self._env = yield self.is_runnable() + yield ServerBase.setUp(self) @inlineCallbacks def _do_test(self, as_subprocess=False, mode="text", addslash=False, override_filename=False, fake_tor=False, overwrite=False, mock_accept=False): - assert mode in ("text", "file", "empty-file", "directory", "slow-text") + assert mode in ("text", "file", "empty-file", "directory", + "slow-text", "slow-sender-text") if fake_tor: assert not as_subprocess send_cfg = config("send") @@ -260,7 +303,7 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase): receive_dir = self.mktemp() os.mkdir(receive_dir) - if mode in ("text", "slow-text"): + if mode in ("text", "slow-text", "slow-sender-text"): send_cfg.text = message elif mode in ("file", "empty-file"): @@ -335,7 +378,7 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase): send_d = getProcessOutputAndValue( wormhole_bin, send_args, path=send_dir, - env=dict(LC_ALL="en_US.UTF-8", LANG="en_US.UTF-8"), + env=self._env, ) recv_args = [ '--relay-url', self.relayurl, @@ -351,7 +394,7 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase): receive_d = getProcessOutputAndValue( wormhole_bin, recv_args, path=receive_dir, - env=dict(LC_ALL="en_US.UTF-8", LANG="en_US.UTF-8"), + env=self._env, ) (send_res, receive_res) = yield gatherResults([send_d, receive_d], @@ -386,20 +429,22 @@ class PregeneratedCode(ServerBase, 
ScriptsBase, unittest.TestCase): ) as mrx_tm: receive_d = cmd_receive.receive(recv_cfg) else: - send_d = cmd_send.send(send_cfg) - receive_d = cmd_receive.receive(recv_cfg) + KEY_TIMER = 0 if mode == "slow-sender-text" else 1.0 + with mock.patch.object(cmd_receive, "KEY_TIMER", KEY_TIMER): + send_d = cmd_send.send(send_cfg) + receive_d = cmd_receive.receive(recv_cfg) # The sender might fail, leaving the receiver hanging, or vice # versa. Make sure we don't wait on one side exclusively - if mode == "slow-text": - with mock.patch.object(cmd_send, "VERIFY_TIMER", 0), \ - mock.patch.object(cmd_receive, "VERIFY_TIMER", 0): - yield gatherResults([send_d, receive_d], True) - elif mock_accept: - with mock.patch.object(cmd_receive.six.moves, 'input', return_value='y'): - yield gatherResults([send_d, receive_d], True) - else: - yield gatherResults([send_d, receive_d], True) + VERIFY_TIMER = 0 if mode == "slow-text" else 1.0 + with mock.patch.object(cmd_receive, "VERIFY_TIMER", VERIFY_TIMER): + with mock.patch.object(cmd_send, "VERIFY_TIMER", VERIFY_TIMER): + if mock_accept: + with mock.patch.object(cmd_receive.six.moves, + 'input', return_value='y'): + yield gatherResults([send_d, receive_d], True) + else: + yield gatherResults([send_d, receive_d], True) if fake_tor: expected_endpoints = [("127.0.0.1", self.relayport)] @@ -470,9 +515,14 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase): .format(NL=NL), send_stderr) # check receiver - if mode == "text" or mode == "slow-text": + if mode in ("text", "slow-text", "slow-sender-text"): self.assertEqual(receive_stdout, message+NL) - self.assertEqual(receive_stderr, key_established) + if mode == "text": + self.assertEqual(receive_stderr, "") + elif mode == "slow-text": + self.assertEqual(receive_stderr, key_established) + elif mode == "slow-sender-text": + self.assertEqual(receive_stderr, "Waiting for sender...\n") elif mode == "file": self.failUnlessEqual(receive_stdout, "") self.failUnlessIn("Receiving file 
({size:s}) into: {name}" @@ -536,6 +586,8 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase): def test_slow_text(self): return self._do_test(mode="slow-text") + def test_slow_sender_text(self): + return self._do_test(mode="slow-sender-text") @inlineCallbacks def _do_test_fail(self, mode, failmode): @@ -682,6 +734,7 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase): # check server stats self._rendezvous.get_stats() + self.flushLoggedErrors(TransferError) def test_fail_file_noclobber(self): return self._do_test_fail("file", "noclobber") @@ -711,6 +764,9 @@ class NotWelcome(ServerBase, unittest.TestCase): send_d = cmd_send.send(self.cfg) f = yield self.assertFailure(send_d, WelcomeError) self.assertEqual(str(f), "please upgrade XYZ") + # TODO: this comes from log.err() in cmd_send.Sender.go._bad, and I'm + # undecided about whether that ought to be doing log.err or not + self.flushLoggedErrors(WelcomeError) @inlineCallbacks def test_receiver(self): @@ -719,7 +775,7 @@ class NotWelcome(ServerBase, unittest.TestCase): receive_d = cmd_receive.receive(self.cfg) f = yield self.assertFailure(receive_d, WelcomeError) self.assertEqual(str(f), "please upgrade XYZ") - + self.flushLoggedErrors(WelcomeError) class Cleanup(ServerBase, unittest.TestCase): @@ -841,3 +897,44 @@ class AppID(ServerBase, unittest.TestCase): ).fetchall() self.assertEqual(len(used), 1, used) self.assertEqual(used[0]["app_id"], u"appid2") + +class Welcome(unittest.TestCase): + def do(self, welcome_message, my_version="2.0", twice=False): + stderr = io.StringIO() + h = welcome.CLIWelcomeHandler("url", my_version, stderr) + h.handle_welcome(welcome_message) + if twice: + h.handle_welcome(welcome_message) + return stderr.getvalue() + + def test_empty(self): + stderr = self.do({}) + self.assertEqual(stderr, "") + + def test_version_current(self): + stderr = self.do({"current_cli_version": "2.0"}) + self.assertEqual(stderr, "") + + def test_version_old(self): + stderr = 
self.do({"current_cli_version": "3.0"}) + expected = ("Warning: errors may occur unless both sides are running the same version\n" + + "Server claims 3.0 is current, but ours is 2.0\n") + self.assertEqual(stderr, expected) + + def test_version_old_twice(self): + stderr = self.do({"current_cli_version": "3.0"}, twice=True) + # the handler should only emit the version warning once, even if we + # get multiple Welcome messages (which could happen if we lose the + # connection and then reconnect) + expected = ("Warning: errors may occur unless both sides are running the same version\n" + + "Server claims 3.0 is current, but ours is 2.0\n") + self.assertEqual(stderr, expected) + + def test_version_unreleased(self): + stderr = self.do({"current_cli_version": "3.0"}, + my_version="2.5+middle.something") + self.assertEqual(stderr, "") + + def test_motd(self): + stderr = self.do({"motd": "hello"}) + self.assertEqual(stderr, "Server (at url) says:\n hello\n") diff --git a/src/wormhole/test/test_journal.py b/src/wormhole/test/test_journal.py new file mode 100644 index 0000000..96b9319 --- /dev/null +++ b/src/wormhole/test/test_journal.py @@ -0,0 +1,28 @@ +from __future__ import print_function, absolute_import, unicode_literals +from twisted.trial import unittest +from .. 
import journal +from .._interfaces import IJournal + +class Journal(unittest.TestCase): + def test_journal(self): + events = [] + j = journal.Journal(lambda: events.append("checkpoint")) + self.assertTrue(IJournal.providedBy(j)) + + with j.process(): + j.queue_outbound(events.append, "message1") + j.queue_outbound(events.append, "message2") + self.assertEqual(events, []) + self.assertEqual(events, ["checkpoint", "message1", "message2"]) + + def test_immediate(self): + events = [] + j = journal.ImmediateJournal() + self.assertTrue(IJournal.providedBy(j)) + + with j.process(): + j.queue_outbound(events.append, "message1") + self.assertEqual(events, ["message1"]) + j.queue_outbound(events.append, "message2") + self.assertEqual(events, ["message1", "message2"]) + self.assertEqual(events, ["message1", "message2"]) diff --git a/src/wormhole/test/test_machines.py b/src/wormhole/test/test_machines.py new file mode 100644 index 0000000..233524e --- /dev/null +++ b/src/wormhole/test/test_machines.py @@ -0,0 +1,1385 @@ +from __future__ import print_function, unicode_literals +import json +import mock +from zope.interface import directlyProvides, implementer +from twisted.trial import unittest +from ..
import (errors, timing, _order, _receive, _key, _code, _lister, _boss, + _input, _allocator, _send, _terminator, _nameplate, _mailbox) +from .._interfaces import (IKey, IReceive, IBoss, ISend, IMailbox, IOrder, + IRendezvousConnector, ILister, IInput, IAllocator, + INameplate, ICode, IWordlist, ITerminator) +from .._key import derive_key, derive_phase_key, encrypt_data +from ..journal import ImmediateJournal +from ..util import dict_to_bytes, hexstr_to_bytes, bytes_to_hexstr, to_bytes +from spake2 import SPAKE2_Symmetric +from nacl.secret import SecretBox + +@implementer(IWordlist) +class FakeWordList(object): + def choose_words(self, length): + return "-".join(["word"] * length) + def get_completions(self, prefix): + self._get_completions_prefix = prefix + return self._completions + +class Dummy: + def __init__(self, name, events, iface, *meths): + self.name = name + self.events = events + if iface: + directlyProvides(self, iface) + for meth in meths: + self.mock(meth) + self.retval = None + def mock(self, meth): + def log(*args): + self.events.append(("%s.%s" % (self.name, meth),) + args) + return self.retval + setattr(self, meth, log) + +class Send(unittest.TestCase): + def build(self): + events = [] + s = _send.Send(u"side", timing.DebugTiming()) + m = Dummy("m", events, IMailbox, "add_message") + s.wire(m) + return s, m, events + + def test_send_first(self): + s, m, events = self.build() + s.send("phase1", b"msg") + self.assertEqual(events, []) + key = b"\x00" * 32 + nonce1 = b"\x00" * SecretBox.NONCE_SIZE + with mock.patch("nacl.utils.random", side_effect=[nonce1]) as r: + s.got_verified_key(key) + self.assertEqual(r.mock_calls, [mock.call(SecretBox.NONCE_SIZE)]) + #print(bytes_to_hexstr(events[0][2])) + enc1 = hexstr_to_bytes("00000000000000000000000000000000000000000000000022f1a46c3c3496423c394621a2a5a8cf275b08") + self.assertEqual(events, [("m.add_message", "phase1", enc1)]) + events[:] = [] + + nonce2 = b"\x02" * SecretBox.NONCE_SIZE + with 
mock.patch("nacl.utils.random", side_effect=[nonce2]) as r: + s.send("phase2", b"msg") + self.assertEqual(r.mock_calls, [mock.call(SecretBox.NONCE_SIZE)]) + enc2 = hexstr_to_bytes("0202020202020202020202020202020202020202020202026660337c3eac6513c0dac9818b62ef16d9cd7e") + self.assertEqual(events, [("m.add_message", "phase2", enc2)]) + + def test_key_first(self): + s, m, events = self.build() + key = b"\x00" * 32 + s.got_verified_key(key) + self.assertEqual(events, []) + + nonce1 = b"\x00" * SecretBox.NONCE_SIZE + with mock.patch("nacl.utils.random", side_effect=[nonce1]) as r: + s.send("phase1", b"msg") + self.assertEqual(r.mock_calls, [mock.call(SecretBox.NONCE_SIZE)]) + enc1 = hexstr_to_bytes("00000000000000000000000000000000000000000000000022f1a46c3c3496423c394621a2a5a8cf275b08") + self.assertEqual(events, [("m.add_message", "phase1", enc1)]) + events[:] = [] + + nonce2 = b"\x02" * SecretBox.NONCE_SIZE + with mock.patch("nacl.utils.random", side_effect=[nonce2]) as r: + s.send("phase2", b"msg") + self.assertEqual(r.mock_calls, [mock.call(SecretBox.NONCE_SIZE)]) + enc2 = hexstr_to_bytes("0202020202020202020202020202020202020202020202026660337c3eac6513c0dac9818b62ef16d9cd7e") + self.assertEqual(events, [("m.add_message", "phase2", enc2)]) + + + +class Order(unittest.TestCase): + def build(self): + events = [] + o = _order.Order(u"side", timing.DebugTiming()) + k = Dummy("k", events, IKey, "got_pake") + r = Dummy("r", events, IReceive, "got_message") + o.wire(k, r) + return o, k, r, events + + def test_in_order(self): + o, k, r, events = self.build() + o.got_message(u"side", u"pake", b"body") + self.assertEqual(events, [("k.got_pake", b"body")]) # right away + o.got_message(u"side", u"version", b"body") + o.got_message(u"side", u"1", b"body") + self.assertEqual(events, + [("k.got_pake", b"body"), + ("r.got_message", u"side", u"version", b"body"), + ("r.got_message", u"side", u"1", b"body"), + ]) + + def test_out_of_order(self): + o, k, r, events = self.build() + 
o.got_message(u"side", u"version", b"body") + self.assertEqual(events, []) # nothing yet + o.got_message(u"side", u"1", b"body") + self.assertEqual(events, []) # nothing yet + o.got_message(u"side", u"pake", b"body") + # got_pake is delivered first + self.assertEqual(events, + [("k.got_pake", b"body"), + ("r.got_message", u"side", u"version", b"body"), + ("r.got_message", u"side", u"1", b"body"), + ]) + +class Receive(unittest.TestCase): + def build(self): + events = [] + r = _receive.Receive(u"side", timing.DebugTiming()) + b = Dummy("b", events, IBoss, + "happy", "scared", "got_verifier", "got_message") + s = Dummy("s", events, ISend, "got_verified_key") + r.wire(b, s) + return r, b, s, events + + def test_good(self): + r, b, s, events = self.build() + key = b"key" + r.got_key(key) + self.assertEqual(events, []) + verifier = derive_key(key, b"wormhole:verifier") + phase1_key = derive_phase_key(key, u"side", u"phase1") + data1 = b"data1" + good_body = encrypt_data(phase1_key, data1) + r.got_message(u"side", u"phase1", good_body) + self.assertEqual(events, [("s.got_verified_key", key), + ("b.happy",), + ("b.got_verifier", verifier), + ("b.got_message", u"phase1", data1), + ]) + + phase2_key = derive_phase_key(key, u"side", u"phase2") + data2 = b"data2" + good_body = encrypt_data(phase2_key, data2) + r.got_message(u"side", u"phase2", good_body) + self.assertEqual(events, [("s.got_verified_key", key), + ("b.happy",), + ("b.got_verifier", verifier), + ("b.got_message", u"phase1", data1), + ("b.got_message", u"phase2", data2), + ]) + + def test_early_bad(self): + r, b, s, events = self.build() + key = b"key" + r.got_key(key) + self.assertEqual(events, []) + phase1_key = derive_phase_key(key, u"side", u"bad") + data1 = b"data1" + bad_body = encrypt_data(phase1_key, data1) + r.got_message(u"side", u"phase1", bad_body) + self.assertEqual(events, [("b.scared",), + ]) + + phase2_key = derive_phase_key(key, u"side", u"phase2") + data2 = b"data2" + good_body = 
encrypt_data(phase2_key, data2) + r.got_message(u"side", u"phase2", good_body) + self.assertEqual(events, [("b.scared",), + ]) + + def test_late_bad(self): + r, b, s, events = self.build() + key = b"key" + r.got_key(key) + self.assertEqual(events, []) + verifier = derive_key(key, b"wormhole:verifier") + phase1_key = derive_phase_key(key, u"side", u"phase1") + data1 = b"data1" + good_body = encrypt_data(phase1_key, data1) + r.got_message(u"side", u"phase1", good_body) + self.assertEqual(events, [("s.got_verified_key", key), + ("b.happy",), + ("b.got_verifier", verifier), + ("b.got_message", u"phase1", data1), + ]) + + phase2_key = derive_phase_key(key, u"side", u"bad") + data2 = b"data2" + bad_body = encrypt_data(phase2_key, data2) + r.got_message(u"side", u"phase2", bad_body) + self.assertEqual(events, [("s.got_verified_key", key), + ("b.happy",), + ("b.got_verifier", verifier), + ("b.got_message", u"phase1", data1), + ("b.scared",), + ]) + r.got_message(u"side", u"phase1", good_body) + r.got_message(u"side", u"phase2", bad_body) + self.assertEqual(events, [("s.got_verified_key", key), + ("b.happy",), + ("b.got_verifier", verifier), + ("b.got_message", u"phase1", data1), + ("b.scared",), + ]) + +class Key(unittest.TestCase): + def test_derive_errors(self): + self.assertRaises(TypeError, derive_key, 123, b"purpose") + self.assertRaises(TypeError, derive_key, b"key", 123) + self.assertRaises(TypeError, derive_key, b"key", b"purpose", "not len") + + def build(self): + events = [] + k = _key.Key(u"appid", {}, u"side", timing.DebugTiming()) + b = Dummy("b", events, IBoss, "scared", "got_key") + m = Dummy("m", events, IMailbox, "add_message") + r = Dummy("r", events, IReceive, "got_key") + k.wire(b, m, r) + return k, b, m, r, events + + def test_good(self): + k, b, m, r, events = self.build() + code = u"1-foo" + k.got_code(code) + self.assertEqual(len(events), 1) + self.assertEqual(events[0][:2], ("m.add_message", "pake")) + msg1_json = events[0][2].decode("utf-8") + 
events[:] = [] + msg1 = json.loads(msg1_json) + msg1_bytes = hexstr_to_bytes(msg1["pake_v1"]) + sp = SPAKE2_Symmetric(to_bytes(code), idSymmetric=to_bytes(u"appid")) + msg2_bytes = sp.start() + key2 = sp.finish(msg1_bytes) + msg2 = dict_to_bytes({"pake_v1": bytes_to_hexstr(msg2_bytes)}) + k.got_pake(msg2) + self.assertEqual(len(events), 3, events) + self.assertEqual(events[0], ("b.got_key", key2)) + self.assertEqual(events[1][:2], ("m.add_message", "version")) + self.assertEqual(events[2], ("r.got_key", key2)) + + def test_bad(self): + k, b, m, r, events = self.build() + code = u"1-foo" + k.got_code(code) + self.assertEqual(len(events), 1) + self.assertEqual(events[0][:2], ("m.add_message", "pake")) + pake_1_json = events[0][2].decode("utf-8") + pake_1 = json.loads(pake_1_json) + self.assertEqual(list(pake_1.keys()), ["pake_v1"]) # value is PAKE stuff + events[:] = [] + bad_pake_d = {"not_pake_v1": "stuff"} + k.got_pake(dict_to_bytes(bad_pake_d)) + self.assertEqual(events, [("b.scared",)]) + + def test_reversed(self): + # A receiver using input_code() will choose the nameplate first, then + # the rest of the code. Once the nameplate is selected, we'll claim + # it and open the mailbox, which will cause the sender's PAKE to + # arrive before the code has been set. Key() is supposed to stash the + # PAKE message until the code is set (allowing the PAKE computation + # to finish). This test exercises that PAKE-then-code sequence.
+ k, b, m, r, events = self.build() + code = u"1-foo" + + sp = SPAKE2_Symmetric(to_bytes(code), idSymmetric=to_bytes(u"appid")) + msg2_bytes = sp.start() + msg2 = dict_to_bytes({"pake_v1": bytes_to_hexstr(msg2_bytes)}) + k.got_pake(msg2) + self.assertEqual(len(events), 0) + + k.got_code(code) + self.assertEqual(len(events), 4) + self.assertEqual(events[0][:2], ("m.add_message", "pake")) + msg1_json = events[0][2].decode("utf-8") + msg1 = json.loads(msg1_json) + msg1_bytes = hexstr_to_bytes(msg1["pake_v1"]) + key2 = sp.finish(msg1_bytes) + self.assertEqual(events[1], ("b.got_key", key2)) + self.assertEqual(events[2][:2], ("m.add_message", "version")) + self.assertEqual(events[3], ("r.got_key", key2)) + +class Code(unittest.TestCase): + def build(self): + events = [] + c = _code.Code(timing.DebugTiming()) + b = Dummy("b", events, IBoss, "got_code") + a = Dummy("a", events, IAllocator, "allocate") + n = Dummy("n", events, INameplate, "set_nameplate") + k = Dummy("k", events, IKey, "got_code") + i = Dummy("i", events, IInput, "start") + c.wire(b, a, n, k, i) + return c, b, a, n, k, i, events + + def test_set_code(self): + c, b, a, n, k, i, events = self.build() + c.set_code(u"1-code") + self.assertEqual(events, [("n.set_nameplate", u"1"), + ("b.got_code", u"1-code"), + ("k.got_code", u"1-code"), + ]) + + def test_allocate_code(self): + c, b, a, n, k, i, events = self.build() + wl = FakeWordList() + c.allocate_code(2, wl) + self.assertEqual(events, [("a.allocate", 2, wl)]) + events[:] = [] + c.allocated("1", "1-code") + self.assertEqual(events, [("n.set_nameplate", u"1"), + ("b.got_code", u"1-code"), + ("k.got_code", u"1-code"), + ]) + + def test_input_code(self): + c, b, a, n, k, i, events = self.build() + c.input_code() + self.assertEqual(events, [("i.start",)]) + events[:] = [] + c.got_nameplate("1") + self.assertEqual(events, [("n.set_nameplate", u"1"), + ]) + events[:] = [] + c.finished_input("1-code") + self.assertEqual(events, [("b.got_code", u"1-code"), + 
("k.got_code", u"1-code"), + ]) + +class Input(unittest.TestCase): + def build(self): + events = [] + i = _input.Input(timing.DebugTiming()) + c = Dummy("c", events, ICode, "got_nameplate", "finished_input") + l = Dummy("l", events, ILister, "refresh") + i.wire(c, l) + return i, c, l, events + + def test_ignore_completion(self): + i, c, l, events = self.build() + helper = i.start() + self.assertIsInstance(helper, _input.Helper) + self.assertEqual(events, [("l.refresh",)]) + events[:] = [] + with self.assertRaises(errors.MustChooseNameplateFirstError): + helper.choose_words("word-word") + helper.choose_nameplate("1") + self.assertEqual(events, [("c.got_nameplate", "1")]) + events[:] = [] + with self.assertRaises(errors.AlreadyChoseNameplateError): + helper.choose_nameplate("2") + helper.choose_words("word-word") + with self.assertRaises(errors.AlreadyChoseWordsError): + helper.choose_words("word-word") + self.assertEqual(events, [("c.finished_input", "1-word-word")]) + + def test_with_completion(self): + i, c, l, events = self.build() + helper = i.start() + self.assertIsInstance(helper, _input.Helper) + self.assertEqual(events, [("l.refresh",)]) + events[:] = [] + d = helper.when_wordlist_is_available() + self.assertNoResult(d) + helper.refresh_nameplates() + self.assertEqual(events, [("l.refresh",)]) + events[:] = [] + with self.assertRaises(errors.MustChooseNameplateFirstError): + helper.get_word_completions("prefix") + i.got_nameplates({"1", "12", "34", "35", "367"}) + self.assertNoResult(d) + self.assertEqual(helper.get_nameplate_completions(""), + {"1-", "12-", "34-", "35-", "367-"}) + self.assertEqual(helper.get_nameplate_completions("1"), + {"1-", "12-"}) + self.assertEqual(helper.get_nameplate_completions("2"), set()) + self.assertEqual(helper.get_nameplate_completions("3"), + {"34-", "35-", "367-"}) + helper.choose_nameplate("34") + with self.assertRaises(errors.AlreadyChoseNameplateError): + helper.refresh_nameplates() + with 
self.assertRaises(errors.AlreadyChoseNameplateError): + helper.get_nameplate_completions("1") + self.assertEqual(events, [("c.got_nameplate", "34")]) + events[:] = [] + # no wordlist yet + self.assertNoResult(d) + self.assertEqual(helper.get_word_completions(""), set()) + wl = FakeWordList() + i.got_wordlist(wl) + self.assertEqual(self.successResultOf(d), None) + # a new Deferred should fire right away + d = helper.when_wordlist_is_available() + self.assertEqual(self.successResultOf(d), None) + + wl._completions = {"abc-", "abcd-", "ae-"} + self.assertEqual(helper.get_word_completions("a"), wl._completions) + self.assertEqual(wl._get_completions_prefix, "a") + with self.assertRaises(errors.AlreadyChoseNameplateError): + helper.refresh_nameplates() + with self.assertRaises(errors.AlreadyChoseNameplateError): + helper.get_nameplate_completions("1") + helper.choose_words("word-word") + with self.assertRaises(errors.AlreadyChoseWordsError): + helper.get_word_completions("prefix") + with self.assertRaises(errors.AlreadyChoseWordsError): + helper.choose_words("word-word") + self.assertEqual(events, [("c.finished_input", "34-word-word")]) + + + +class Lister(unittest.TestCase): + def build(self): + events = [] + l = _lister.Lister(timing.DebugTiming()) + rc = Dummy("rc", events, IRendezvousConnector, "tx_list") + i = Dummy("i", events, IInput, "got_nameplates") + l.wire(rc, i) + return l, rc, i, events + + def test_connect_first(self): + l, rc, i, events = self.build() + l.connected() + l.lost() + l.connected() + self.assertEqual(events, []) + l.refresh() + self.assertEqual(events, [("rc.tx_list",), + ]) + events[:] = [] + l.rx_nameplates({"1", "2", "3"}) + self.assertEqual(events, [("i.got_nameplates", {"1", "2", "3"}), + ]) + events[:] = [] + # now we're satisfied: disconnecting and reconnecting won't ask again + l.lost() + l.connected() + self.assertEqual(events, []) + + # but if we're told to refresh, we'll do so + l.refresh() + self.assertEqual(events, 
[("rc.tx_list",), + ]) + + def test_connect_first_ask_twice(self): + l, rc, i, events = self.build() + l.connected() + self.assertEqual(events, []) + l.refresh() + l.refresh() + self.assertEqual(events, [("rc.tx_list",), + ("rc.tx_list",), + ]) + l.rx_nameplates({"1", "2", "3"}) + self.assertEqual(events, [("rc.tx_list",), + ("rc.tx_list",), + ("i.got_nameplates", {"1", "2", "3"}), + ]) + l.rx_nameplates({"1", "2", "3", "4"}) + self.assertEqual(events, [("rc.tx_list",), + ("rc.tx_list",), + ("i.got_nameplates", {"1", "2", "3"}), + ("i.got_nameplates", {"1", "2", "3", "4"}), + ]) + + def test_reconnect(self): + l, rc, i, events = self.build() + l.refresh() + l.connected() + self.assertEqual(events, [("rc.tx_list",), + ]) + events[:] = [] + l.lost() + l.connected() + self.assertEqual(events, [("rc.tx_list",), + ]) + + def test_refresh_first(self): + l, rc, i, events = self.build() + l.refresh() + self.assertEqual(events, []) + l.connected() + self.assertEqual(events, [("rc.tx_list",), + ]) + l.rx_nameplates({"1", "2", "3"}) + self.assertEqual(events, [("rc.tx_list",), + ("i.got_nameplates", {"1", "2", "3"}), + ]) + + def test_unrefreshed(self): + l, rc, i, events = self.build() + self.assertEqual(events, []) + # we receive a spontaneous rx_nameplates, without asking + l.connected() + self.assertEqual(events, []) + l.rx_nameplates({"1", "2", "3"}) + self.assertEqual(events, [("i.got_nameplates", {"1", "2", "3"}), + ]) + +class Allocator(unittest.TestCase): + def build(self): + events = [] + a = _allocator.Allocator(timing.DebugTiming()) + rc = Dummy("rc", events, IRendezvousConnector, "tx_allocate") + c = Dummy("c", events, ICode, "allocated") + a.wire(rc, c) + return a, rc, c, events + + def test_no_allocation(self): + a, rc, c, events = self.build() + a.connected() + self.assertEqual(events, []) + + def test_allocate_first(self): + a, rc, c, events = self.build() + a.allocate(2, FakeWordList()) + self.assertEqual(events, []) + a.connected() +
self.assertEqual(events, [("rc.tx_allocate",)]) + events[:] = [] + a.lost() + a.connected() + self.assertEqual(events, [("rc.tx_allocate",), + ]) + events[:] = [] + a.rx_allocated("1") + self.assertEqual(events, [("c.allocated", "1", "1-word-word"), + ]) + + def test_connect_first(self): + a, rc, c, events = self.build() + a.connected() + self.assertEqual(events, []) + a.allocate(2, FakeWordList()) + self.assertEqual(events, [("rc.tx_allocate",)]) + events[:] = [] + a.lost() + a.connected() + self.assertEqual(events, [("rc.tx_allocate",), + ]) + events[:] = [] + a.rx_allocated("1") + self.assertEqual(events, [("c.allocated", "1", "1-word-word"), + ]) + +class Nameplate(unittest.TestCase): + def build(self): + events = [] + n = _nameplate.Nameplate() + m = Dummy("m", events, IMailbox, "got_mailbox") + i = Dummy("i", events, IInput, "got_wordlist") + rc = Dummy("rc", events, IRendezvousConnector, "tx_claim", "tx_release") + t = Dummy("t", events, ITerminator, "nameplate_done") + n.wire(m, i, rc, t) + return n, m, i, rc, t, events + + def test_set_first(self): + # connection remains up throughout + n, m, i, rc, t, events = self.build() + n.set_nameplate("1") + self.assertEqual(events, []) + n.connected() + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.release() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_connect_first(self): + # connection remains up throughout + n, m, i, rc, t, events = self.build() + n.connected() + self.assertEqual(events, []) + + n.set_nameplate("1") + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", 
return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.release() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_reconnect_while_claiming(self): + # connection bounced while waiting for rx_claimed + n, m, i, rc, t, events = self.build() + n.connected() + self.assertEqual(events, []) + + n.set_nameplate("1") + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + n.lost() + n.connected() + self.assertEqual(events, [("rc.tx_claim", "1")]) + + def test_reconnect_while_claimed(self): + # connection bounced while claimed: no retransmits should be sent + n, m, i, rc, t, events = self.build() + n.connected() + self.assertEqual(events, []) + + n.set_nameplate("1") + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.lost() + n.connected() + self.assertEqual(events, []) + + def test_reconnect_while_releasing(self): + # connection bounced while waiting for rx_released + n, m, i, rc, t, events = self.build() + n.connected() + self.assertEqual(events, []) + + n.set_nameplate("1") + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.release() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.lost() + n.connected() + self.assertEqual(events, [("rc.tx_release", "1")]) + + def test_reconnect_while_done(self): + # connection bounces after we're done + n, m, i, rc, t, events 
= self.build() + n.connected() + self.assertEqual(events, []) + + n.set_nameplate("1") + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.release() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + events[:] = [] + + n.lost() + n.connected() + self.assertEqual(events, []) + + def test_close_while_idle(self): + n, m, i, rc, t, events = self.build() + n.close() + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_close_while_idle_connected(self): + n, m, i, rc, t, events = self.build() + n.connected() + self.assertEqual(events, []) + n.close() + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_close_while_unclaimed(self): + n, m, i, rc, t, events = self.build() + n.set_nameplate("1") + n.close() # before ever being connected + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_close_while_claiming(self): + n, m, i, rc, t, events = self.build() + n.set_nameplate("1") + self.assertEqual(events, []) + n.connected() + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + n.close() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_close_while_claiming_but_disconnected(self): + n, m, i, rc, t, events = self.build() + n.set_nameplate("1") + self.assertEqual(events, []) + n.connected() + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + n.lost() + n.close() + self.assertEqual(events, []) + # we're now waiting for a connection, so we can release the nameplate + n.connected() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + 
self.assertEqual(events, [("t.nameplate_done",)]) + + def test_close_while_claimed(self): + n, m, i, rc, t, events = self.build() + n.set_nameplate("1") + self.assertEqual(events, []) + n.connected() + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.close() + # this path behaves just like a deliberate release() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_close_while_claimed_but_disconnected(self): + n, m, i, rc, t, events = self.build() + n.set_nameplate("1") + self.assertEqual(events, []) + n.connected() + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.lost() + n.close() + # we're now waiting for a connection, so we can release the nameplate + n.connected() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_close_while_releasing(self): + n, m, i, rc, t, events = self.build() + n.set_nameplate("1") + self.assertEqual(events, []) + n.connected() + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.release() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.close() # ignored, we're already on our way out the door + 
self.assertEqual(events, []) + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_close_while_releasing_but_disconnected(self): + n, m, i, rc, t, events = self.build() + n.set_nameplate("1") + self.assertEqual(events, []) + n.connected() + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.release() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.lost() + n.close() + # we must retransmit the tx_release when we reconnect + self.assertEqual(events, []) + + n.connected() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + + def test_close_while_done(self): + # connection remains up throughout + n, m, i, rc, t, events = self.build() + n.connected() + self.assertEqual(events, []) + + n.set_nameplate("1") + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") + self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.release() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + events[:] = [] + + n.close() # NOP + self.assertEqual(events, []) + + def test_close_while_done_but_disconnected(self): + # connection drops after we're done + n, m, i, rc, t, events = self.build() + n.connected() + self.assertEqual(events, []) + + n.set_nameplate("1") + self.assertEqual(events, [("rc.tx_claim", "1")]) + events[:] = [] + + wl = object() + with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): + n.rx_claimed("mbox1") +
self.assertEqual(events, [("i.got_wordlist", wl), + ("m.got_mailbox", "mbox1"), + ]) + events[:] = [] + + n.release() + self.assertEqual(events, [("rc.tx_release", "1")]) + events[:] = [] + + n.rx_released() + self.assertEqual(events, [("t.nameplate_done",)]) + events[:] = [] + + n.lost() + n.close() # NOP + self.assertEqual(events, []) + +class Mailbox(unittest.TestCase): + def build(self): + events = [] + m = _mailbox.Mailbox("side1") + n = Dummy("n", events, INameplate, "release") + rc = Dummy("rc", events, IRendezvousConnector, + "tx_add", "tx_open", "tx_close") + o = Dummy("o", events, IOrder, "got_message") + t = Dummy("t", events, ITerminator, "mailbox_done") + m.wire(n, rc, o, t) + return m, n, rc, o, t, events + + # TODO: test moods + + def assert_events(self, events, initial_events, tx_add_events): + self.assertEqual(len(events), len(initial_events)+len(tx_add_events), + events) + self.assertEqual(events[:len(initial_events)], initial_events) + self.assertEqual(set(events[len(initial_events):]), tx_add_events) + + def test_connect_first(self): # connect before got_mailbox + m, n, rc, o, t, events = self.build() + m.add_message("phase1", b"msg1") + self.assertEqual(events, []) + + m.connected() + self.assertEqual(events, []) + + m.got_mailbox("mbox1") + self.assertEqual(events, [("rc.tx_open", "mbox1"), + ("rc.tx_add", "phase1", b"msg1")]) + events[:] = [] + + m.add_message("phase2", b"msg2") + self.assertEqual(events, [("rc.tx_add", "phase2", b"msg2")]) + events[:] = [] + + # bouncing the connection should retransmit everything, even the open() + m.lost() + self.assertEqual(events, []) + # and messages sent while here should be queued + m.add_message("phase3", b"msg3") + self.assertEqual(events, []) + + m.connected() + # the other messages are allowed to be sent in any order + self.assert_events(events, [("rc.tx_open", "mbox1")], + { ("rc.tx_add", "phase1", b"msg1"), + ("rc.tx_add", "phase2", b"msg2"), + ("rc.tx_add", "phase3", b"msg3"), + }) + events[:] 
= [] + + m.rx_message("side1", "phase1", b"msg1") # echo of our message, dequeue + self.assertEqual(events, []) + + m.lost() + m.connected() + self.assert_events(events, [("rc.tx_open", "mbox1")], + {("rc.tx_add", "phase2", b"msg2"), + ("rc.tx_add", "phase3", b"msg3"), + }) + events[:] = [] + + # a new message from the peer gets delivered, and the Nameplate is + # released since the message proves that our peer opened the Mailbox + # and therefore no longer needs the Nameplate + m.rx_message("side2", "phase1", b"msg1them") # new message from peer + self.assertEqual(events, [("n.release",), + ("o.got_message", "side2", "phase1", b"msg1them"), + ]) + events[:] = [] + + # we de-duplicate peer messages, but still re-release the nameplate + # since Nameplate is smart enough to ignore that + m.rx_message("side2", "phase1", b"msg1them") + self.assertEqual(events, [("n.release",), + ]) + events[:] = [] + + m.close("happy") + self.assertEqual(events, [("rc.tx_close", "mbox1", "happy")]) + events[:] = [] + + # while closing, we ignore a lot + m.add_message("phase-late", b"late") + m.rx_message("side1", "phase2", b"msg2") + m.close("happy") + self.assertEqual(events, []) + + # bouncing the connection forces a retransmit of the tx_close + m.lost() + self.assertEqual(events, []) + m.connected() + self.assertEqual(events, [("rc.tx_close", "mbox1", "happy")]) + events[:] = [] + + m.rx_closed() + self.assertEqual(events, [("t.mailbox_done",)]) + events[:] = [] + + # while closed, we ignore everything + m.add_message("phase-late", b"late") + m.rx_message("side1", "phase2", b"msg2") + m.close("happy") + m.lost() + m.connected() + self.assertEqual(events, []) + + def test_mailbox_first(self): # got_mailbox before connect + m, n, rc, o, t, events = self.build() + m.add_message("phase1", b"msg1") + self.assertEqual(events, []) + + m.got_mailbox("mbox1") + m.add_message("phase2", b"msg2") + self.assertEqual(events, []) + + m.connected() + + self.assert_events(events, [("rc.tx_open", 
"mbox1")], + { ("rc.tx_add", "phase1", b"msg1"), + ("rc.tx_add", "phase2", b"msg2"), + }) + + def test_close_while_idle(self): + m, n, rc, o, t, events = self.build() + m.close("happy") + self.assertEqual(events, [("t.mailbox_done",)]) + + def test_close_while_idle_but_connected(self): + m, n, rc, o, t, events = self.build() + m.connected() + m.close("happy") + self.assertEqual(events, [("t.mailbox_done",)]) + + def test_close_while_mailbox_disconnected(self): + m, n, rc, o, t, events = self.build() + m.got_mailbox("mbox1") + m.close("happy") + self.assertEqual(events, [("t.mailbox_done",)]) + + def test_close_while_reconnecting(self): + m, n, rc, o, t, events = self.build() + m.got_mailbox("mbox1") + m.connected() + self.assertEqual(events, [("rc.tx_open", "mbox1")]) + events[:] = [] + + m.lost() + self.assertEqual(events, []) + m.close("happy") + self.assertEqual(events, []) + # we now wait to connect, so we can send the tx_close + + m.connected() + self.assertEqual(events, [("rc.tx_close", "mbox1", "happy")]) + events[:] = [] + + m.rx_closed() + self.assertEqual(events, [("t.mailbox_done",)]) + events[:] = [] + +class Terminator(unittest.TestCase): + def build(self): + events = [] + t = _terminator.Terminator() + b = Dummy("b", events, IBoss, "closed") + rc = Dummy("rc", events, IRendezvousConnector, "stop") + n = Dummy("n", events, INameplate, "close") + m = Dummy("m", events, IMailbox, "close") + t.wire(b, rc, n, m) + return t, b, rc, n, m, events + + # there are three events, and we need to test all orderings of them + def _do_test(self, ev1, ev2, ev3): + t, b, rc, n, m, events = self.build() + input_events = {"mailbox": lambda: t.mailbox_done(), + "nameplate": lambda: t.nameplate_done(), + "close": lambda: t.close("happy"), + } + close_events = [("n.close",), + ("m.close", "happy"), + ] + + input_events[ev1]() + expected = [] + if ev1 == "close": + expected.extend(close_events) + self.assertEqual(events, expected) + events[:] = [] + + input_events[ev2]() + 
expected = [] + if ev2 == "close": + expected.extend(close_events) + self.assertEqual(events, expected) + events[:] = [] + + input_events[ev3]() + expected = [] + if ev3 == "close": + expected.extend(close_events) + expected.append(("rc.stop",)) + self.assertEqual(events, expected) + events[:] = [] + + t.stopped() + self.assertEqual(events, [("b.closed",)]) + + def test_terminate(self): + self._do_test("mailbox", "nameplate", "close") + self._do_test("mailbox", "close", "nameplate") + self._do_test("nameplate", "mailbox", "close") + self._do_test("nameplate", "close", "mailbox") + self._do_test("close", "nameplate", "mailbox") + self._do_test("close", "mailbox", "nameplate") + + # TODO: test moods + +class MockBoss(_boss.Boss): + def __attrs_post_init__(self): + #self._build_workers() + self._init_other_state() + +class Boss(unittest.TestCase): + def build(self): + events = [] + wormhole = Dummy("w", events, None, + "got_code", "got_key", "got_verifier", "got_version", + "received", "closed") + self._welcome_handler = mock.Mock() + versions = {"app": "version1"} + reactor = None + journal = ImmediateJournal() + tor_manager = None + b = MockBoss(wormhole, "side", "url", "appid", versions, + self._welcome_handler, reactor, journal, tor_manager, + timing.DebugTiming()) + b._T = Dummy("t", events, ITerminator, "close") + b._S = Dummy("s", events, ISend, "send") + b._RC = Dummy("rc", events, IRendezvousConnector, "start") + b._C = Dummy("c", events, ICode, + "allocate_code", "input_code", "set_code") + return b, events + + def test_basic(self): + b, events = self.build() + b.set_code("1-code") + self.assertEqual(events, [("c.set_code", "1-code")]) + events[:] = [] + + b.got_code("1-code") + self.assertEqual(events, [("w.got_code", "1-code")]) + events[:] = [] + + b.rx_welcome("welcome") + self.assertEqual(self._welcome_handler.mock_calls, [mock.call("welcome")]) + + # pretend a peer message was correctly decrypted + b.got_key(b"key") + b.happy() + 
b.got_verifier(b"verifier") + b.got_message("version", b"{}") + b.got_message("0", b"msg1") + self.assertEqual(events, [("w.got_key", b"key"), + ("w.got_verifier", b"verifier"), + ("w.got_version", {}), + ("w.received", b"msg1"), + ]) + events[:] = [] + + b.send(b"msg2") + self.assertEqual(events, [("s.send", "0", b"msg2")]) + events[:] = [] + + b.close() + self.assertEqual(events, [("t.close", "happy")]) + events[:] = [] + + b.closed() + self.assertEqual(events, [("w.closed", "happy")]) + + def test_lonely(self): + b, events = self.build() + b.set_code("1-code") + self.assertEqual(events, [("c.set_code", "1-code")]) + events[:] = [] + + b.got_code("1-code") + self.assertEqual(events, [("w.got_code", "1-code")]) + events[:] = [] + + b.close() + self.assertEqual(events, [("t.close", "lonely")]) + events[:] = [] + + b.closed() + self.assertEqual(len(events), 1, events) + self.assertEqual(events[0][0], "w.closed") + self.assertIsInstance(events[0][1], errors.LonelyError) + + def test_server_error(self): + b, events = self.build() + b.set_code("1-code") + self.assertEqual(events, [("c.set_code", "1-code")]) + events[:] = [] + + orig = {} + b.rx_error("server-error-msg", orig) + self.assertEqual(events, [("t.close", "errory")]) + events[:] = [] + + b.closed() + self.assertEqual(len(events), 1, events) + self.assertEqual(events[0][0], "w.closed") + self.assertIsInstance(events[0][1], errors.ServerError) + self.assertEqual(events[0][1].args[0], "server-error-msg") + + def test_internal_error(self): + b, events = self.build() + b.set_code("1-code") + self.assertEqual(events, [("c.set_code", "1-code")]) + events[:] = [] + + b.error(ValueError("catch me")) + self.assertEqual(len(events), 1, events) + self.assertEqual(events[0][0], "w.closed") + self.assertIsInstance(events[0][1], ValueError) + self.assertEqual(events[0][1].args[0], "catch me") + + def test_close_early(self): + b, events = self.build() + b.set_code("1-code") + self.assertEqual(events, [("c.set_code", 
"1-code")]) + events[:] = [] + + b.close() # before even w.got_code + self.assertEqual(events, [("t.close", "lonely")]) + events[:] = [] + + b.closed() + self.assertEqual(len(events), 1, events) + self.assertEqual(events[0][0], "w.closed") + self.assertIsInstance(events[0][1], errors.LonelyError) + + def test_error_while_closing(self): + b, events = self.build() + b.set_code("1-code") + self.assertEqual(events, [("c.set_code", "1-code")]) + events[:] = [] + + b.close() + self.assertEqual(events, [("t.close", "lonely")]) + events[:] = [] + + b.error(ValueError("oops")) + self.assertEqual(len(events), 1, events) + self.assertEqual(events[0][0], "w.closed") + self.assertIsInstance(events[0][1], ValueError) + + def test_scary_version(self): + b, events = self.build() + b.set_code("1-code") + self.assertEqual(events, [("c.set_code", "1-code")]) + events[:] = [] + + b.got_code("1-code") + self.assertEqual(events, [("w.got_code", "1-code")]) + events[:] = [] + + b.scared() + self.assertEqual(events, [("t.close", "scary")]) + events[:] = [] + + b.closed() + self.assertEqual(len(events), 1, events) + self.assertEqual(events[0][0], "w.closed") + self.assertIsInstance(events[0][1], errors.WrongPasswordError) + + def test_scary_phase(self): + b, events = self.build() + b.set_code("1-code") + self.assertEqual(events, [("c.set_code", "1-code")]) + events[:] = [] + + b.got_code("1-code") + self.assertEqual(events, [("w.got_code", "1-code")]) + events[:] = [] + + b.happy() # phase=version + + b.scared() # phase=0 + self.assertEqual(events, [("t.close", "scary")]) + events[:] = [] + + b.closed() + self.assertEqual(len(events), 1, events) + self.assertEqual(events[0][0], "w.closed") + self.assertIsInstance(events[0][1], errors.WrongPasswordError) + + def test_unknown_phase(self): + b, events = self.build() + b.set_code("1-code") + self.assertEqual(events, [("c.set_code", "1-code")]) + events[:] = [] + + b.got_code("1-code") + self.assertEqual(events, [("w.got_code", "1-code")]) + 
events[:] = [] + + b.happy() # phase=version + + b.got_message("unknown-phase", b"spooky") + self.assertEqual(events, []) + + self.flushLoggedErrors(errors._UnknownPhaseError) + + def test_set_code_bad_format(self): + b, events = self.build() + with self.assertRaises(errors.KeyFormatError): + b.set_code("1 code") + + def test_set_code_bad_twice(self): + b, events = self.build() + b.set_code("1-code") + with self.assertRaises(errors.OnlyOneCodeError): + b.set_code("1-code") + + def test_input_code(self): + b, events = self.build() + b._C.retval = "helper" + helper = b.input_code() + self.assertEqual(events, [("c.input_code",)]) + self.assertEqual(helper, "helper") + with self.assertRaises(errors.OnlyOneCodeError): + b.input_code() + + def test_allocate_code(self): + b, events = self.build() + wl = object() + with mock.patch("wormhole._boss.PGPWordList", return_value=wl): + b.allocate_code(3) + self.assertEqual(events, [("c.allocate_code", 3, wl)]) + with self.assertRaises(errors.OnlyOneCodeError): + b.allocate_code(3) + + + + +# TODO +# #Send +# #Mailbox +# #Nameplate +# #Terminator +# Boss +# RendezvousConnector (not a state machine) +# #Input: exercise helper methods +# #wordlist +# test idempotency / at-most-once where applicable diff --git a/src/wormhole/test/test_rlcompleter.py b/src/wormhole/test/test_rlcompleter.py new file mode 100644 index 0000000..f21e55f --- /dev/null +++ b/src/wormhole/test/test_rlcompleter.py @@ -0,0 +1,365 @@ +from __future__ import print_function, absolute_import, unicode_literals +import mock +from itertools import count +from twisted.trial import unittest +from twisted.internet import reactor +from twisted.internet.defer import inlineCallbacks +from twisted.internet.threads import deferToThread +from .._rlcompleter import (input_with_completion, + _input_code_with_completion, + CodeInputter, warn_readline) +from ..errors import KeyFormatError, AlreadyInputNameplateError +APPID = "appid" + +class Input(unittest.TestCase): + 
@inlineCallbacks + def test_wrapper(self): + helper = object() + trueish = object() + with mock.patch("wormhole._rlcompleter._input_code_with_completion", + return_value=trueish) as m: + used_completion = yield input_with_completion("prompt:", helper, + reactor) + self.assertIs(used_completion, trueish) + self.assertEqual(m.mock_calls, + [mock.call("prompt:", helper, reactor)]) + # note: if this test fails, the warn_readline() message will probably + # get written to stderr + +class Sync(unittest.TestCase): + # exercise _input_code_with_completion, which uses the blocking builtin + # "input()" function, hence _input_code_with_completion is usually in a + # thread with deferToThread + + @mock.patch("wormhole._rlcompleter.CodeInputter") + @mock.patch("wormhole._rlcompleter.readline", + __doc__="I am GNU readline") + @mock.patch("wormhole._rlcompleter.input", return_value="code") + def test_readline(self, input, readline, ci): + c = mock.Mock(name="inhibit parenting") + c.completer = object() + trueish = object() + c.used_completion = trueish + ci.configure_mock(return_value=c) + prompt = object() + input_helper = object() + reactor = object() + used = _input_code_with_completion(prompt, input_helper, reactor) + self.assertIs(used, trueish) + self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) + self.assertEqual(c.mock_calls, [mock.call.finish("code")]) + self.assertEqual(input.mock_calls, [mock.call(prompt)]) + self.assertEqual(readline.mock_calls, + [mock.call.parse_and_bind("tab: complete"), + mock.call.set_completer(c.completer), + mock.call.set_completer_delims(""), + ]) + + @mock.patch("wormhole._rlcompleter.CodeInputter") + @mock.patch("wormhole._rlcompleter.readline") + @mock.patch("wormhole._rlcompleter.input", return_value="code") + def test_readline_no_docstring(self, input, readline, ci): + del readline.__doc__ # when in doubt, it assumes GNU readline + c = mock.Mock(name="inhibit parenting") + c.completer = object() + trueish = object() + 
c.used_completion = trueish + ci.configure_mock(return_value=c) + prompt = object() + input_helper = object() + reactor = object() + used = _input_code_with_completion(prompt, input_helper, reactor) + self.assertIs(used, trueish) + self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) + self.assertEqual(c.mock_calls, [mock.call.finish("code")]) + self.assertEqual(input.mock_calls, [mock.call(prompt)]) + self.assertEqual(readline.mock_calls, + [mock.call.parse_and_bind("tab: complete"), + mock.call.set_completer(c.completer), + mock.call.set_completer_delims(""), + ]) + + @mock.patch("wormhole._rlcompleter.CodeInputter") + @mock.patch("wormhole._rlcompleter.readline", + __doc__="I am libedit") + @mock.patch("wormhole._rlcompleter.input", return_value="code") + def test_libedit(self, input, readline, ci): + c = mock.Mock(name="inhibit parenting") + c.completer = object() + trueish = object() + c.used_completion = trueish + ci.configure_mock(return_value=c) + prompt = object() + input_helper = object() + reactor = object() + used = _input_code_with_completion(prompt, input_helper, reactor) + self.assertIs(used, trueish) + self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) + self.assertEqual(c.mock_calls, [mock.call.finish("code")]) + self.assertEqual(input.mock_calls, [mock.call(prompt)]) + self.assertEqual(readline.mock_calls, + [mock.call.parse_and_bind("bind ^I rl_complete"), + mock.call.set_completer(c.completer), + mock.call.set_completer_delims(""), + ]) + + @mock.patch("wormhole._rlcompleter.CodeInputter") + @mock.patch("wormhole._rlcompleter.readline", None) + @mock.patch("wormhole._rlcompleter.input", return_value="code") + def test_no_readline(self, input, ci): + c = mock.Mock(name="inhibit parenting") + c.completer = object() + trueish = object() + c.used_completion = trueish + ci.configure_mock(return_value=c) + prompt = object() + input_helper = object() + reactor = object() + used = _input_code_with_completion(prompt, 
input_helper, reactor) + self.assertIs(used, trueish) + self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) + self.assertEqual(c.mock_calls, [mock.call.finish("code")]) + self.assertEqual(input.mock_calls, [mock.call(prompt)]) + + @mock.patch("wormhole._rlcompleter.CodeInputter") + @mock.patch("wormhole._rlcompleter.readline", None) + @mock.patch("wormhole._rlcompleter.input", return_value=b"code") + def test_bytes(self, input, ci): + c = mock.Mock(name="inhibit parenting") + c.completer = object() + trueish = object() + c.used_completion = trueish + ci.configure_mock(return_value=c) + prompt = object() + input_helper = object() + reactor = object() + used = _input_code_with_completion(prompt, input_helper, reactor) + self.assertIs(used, trueish) + self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) + self.assertEqual(c.mock_calls, [mock.call.finish(u"code")]) + self.assertEqual(input.mock_calls, [mock.call(prompt)]) + +def get_completions(c, prefix): + completions = [] + for state in count(0): + text = c.completer(prefix, state) + if text is None: + return completions + completions.append(text) + +class Completion(unittest.TestCase): + def test_simple(self): + # no actual completion + helper = mock.Mock() + c = CodeInputter(helper, "reactor") + c.finish("1-code-ghost") + self.assertFalse(c.used_completion) + self.assertEqual(helper.mock_calls, + [mock.call.choose_nameplate("1"), + mock.call.choose_words("code-ghost")]) + + @mock.patch("wormhole._rlcompleter.readline", + get_completion_type=mock.Mock(return_value=0)) + def test_call(self, readline): + # check that it calls _commit_and_build_completions correctly + helper = mock.Mock() + c = CodeInputter(helper, "reactor") + + # pretend nameplates: 1, 12, 34 + + # first call will be with "1" + cabc = mock.Mock(return_value=["1", "12"]) + c._commit_and_build_completions = cabc + + self.assertEqual(get_completions(c, "1"), ["1", "12"]) + self.assertEqual(cabc.mock_calls, 
[mock.call("1")]) + + # then "12" + cabc.reset_mock() + cabc.configure_mock(return_value=["12"]) + self.assertEqual(get_completions(c, "12"), ["12"]) + self.assertEqual(cabc.mock_calls, [mock.call("12")]) + + # now we have three "a" words: "aargh", "ark", "aaah!zombies!!" + cabc.reset_mock() + cabc.configure_mock(return_value=["aargh", "ark", "aaah!zombies!!"]) + self.assertEqual(get_completions(c, "12-a"), + ["aargh", "ark", "aaah!zombies!!"]) + self.assertEqual(cabc.mock_calls, [mock.call("12-a")]) + + cabc.reset_mock() + cabc.configure_mock(return_value=["aargh", "aaah!zombies!!"]) + self.assertEqual(get_completions(c, "12-aa"), + ["aargh", "aaah!zombies!!"]) + self.assertEqual(cabc.mock_calls, [mock.call("12-aa")]) + + cabc.reset_mock() + cabc.configure_mock(return_value=["aaah!zombies!!"]) + self.assertEqual(get_completions(c, "12-aaa"), ["aaah!zombies!!"]) + self.assertEqual(cabc.mock_calls, [mock.call("12-aaa")]) + + c.finish("1-code") + self.assert_(c.used_completion) + + def test_wrap_error(self): + helper = mock.Mock() + c = CodeInputter(helper, "reactor") + c._wrapped_completer = mock.Mock(side_effect=ValueError("oops")) + with mock.patch("wormhole._rlcompleter.traceback") as traceback: + with mock.patch("wormhole._rlcompleter.print") as mock_print: + with self.assertRaises(ValueError) as e: + c.completer("text", 0) + self.assertEqual(traceback.mock_calls, [mock.call.print_exc()]) + self.assertEqual(mock_print.mock_calls, + [mock.call("completer exception: oops")]) + self.assertEqual(str(e.exception), "oops") + + @inlineCallbacks + def test_build_completions(self): + rn = mock.Mock() + # InputHelper.get_nameplate_completions returns just the suffixes + gnc = mock.Mock() # get_nameplate_completions + cn = mock.Mock() # choose_nameplate + gwc = mock.Mock() # get_word_completions + cw = mock.Mock() # choose_words + helper = mock.Mock(refresh_nameplates=rn, + get_nameplate_completions=gnc, + choose_nameplate=cn, + get_word_completions=gwc, + choose_words=cw, + 
) + # this needs a real reactor, for blockingCallFromThread + c = CodeInputter(helper, reactor) + cabc = c._commit_and_build_completions + + # in this test, we pretend that nameplates 1,12,34 are active. + + # 43 TAB -> nothing (and refresh_nameplates) + gnc.configure_mock(return_value=[]) + matches = yield deferToThread(cabc, "43") + self.assertEqual(matches, []) + self.assertEqual(rn.mock_calls, [mock.call()]) + self.assertEqual(gnc.mock_calls, [mock.call("43")]) + self.assertEqual(cn.mock_calls, []) + rn.reset_mock() + gnc.reset_mock() + + # 1 TAB -> 1-, 12- (and refresh_nameplates) + gnc.configure_mock(return_value=["1-", "12-"]) + matches = yield deferToThread(cabc, "1") + self.assertEqual(matches, ["1-", "12-"]) + self.assertEqual(rn.mock_calls, [mock.call()]) + self.assertEqual(gnc.mock_calls, [mock.call("1")]) + self.assertEqual(cn.mock_calls, []) + rn.reset_mock() + gnc.reset_mock() + + # 12 TAB -> 12- (and refresh_nameplates) + # I wouldn't mind if it didn't refresh the nameplates here, but meh + gnc.configure_mock(return_value=["12-"]) + matches = yield deferToThread(cabc, "12") + self.assertEqual(matches, ["12-"]) + self.assertEqual(rn.mock_calls, [mock.call()]) + self.assertEqual(gnc.mock_calls, [mock.call("12")]) + self.assertEqual(cn.mock_calls, []) + rn.reset_mock() + gnc.reset_mock() + + # 12- TAB -> 12- {all words} (claim, no refresh) + gnc.configure_mock(return_value=["12-"]) + gwc.configure_mock(return_value=["and-", "ark-", "aaah!zombies!!-"]) + matches = yield deferToThread(cabc, "12-") + self.assertEqual(matches, ["12-aaah!zombies!!-", "12-and-", "12-ark-"]) + self.assertEqual(rn.mock_calls, []) + self.assertEqual(gnc.mock_calls, []) + self.assertEqual(cn.mock_calls, [mock.call("12")]) + self.assertEqual(gwc.mock_calls, [mock.call("")]) + cn.reset_mock() + gwc.reset_mock() + + # TODO: another path with "3 TAB" then "34-an TAB", so the claim + # happens in the second call (and it waits for the wordlist) + + # 12-a TAB -> 12-and- 12-ark- 
12-aaah!zombies!!- + gnc.configure_mock(side_effect=ValueError) + gwc.configure_mock(return_value=["and-", "ark-", "aaah!zombies!!-"]) + matches = yield deferToThread(cabc, "12-a") + # matches are always sorted + self.assertEqual(matches, ["12-aaah!zombies!!-", "12-and-", "12-ark-"]) + self.assertEqual(rn.mock_calls, []) + self.assertEqual(gnc.mock_calls, []) + self.assertEqual(cn.mock_calls, []) + self.assertEqual(gwc.mock_calls, [mock.call("a")]) + gwc.reset_mock() + + # 12-and-b TAB -> 12-and-bat 12-and-bet 12-and-but + gnc.configure_mock(side_effect=ValueError) + # wordlist knows the code length, so doesn't add hyphens to these + gwc.configure_mock(return_value=["and-bat", "and-bet", "and-but"]) + matches = yield deferToThread(cabc, "12-and-b") + self.assertEqual(matches, ["12-and-bat", "12-and-bet", "12-and-but"]) + self.assertEqual(rn.mock_calls, []) + self.assertEqual(gnc.mock_calls, []) + self.assertEqual(cn.mock_calls, []) + self.assertEqual(gwc.mock_calls, [mock.call("and-b")]) + gwc.reset_mock() + + c.finish("12-and-bat") + self.assertEqual(cw.mock_calls, [mock.call("and-bat")]) + + def test_incomplete_code(self): + helper = mock.Mock() + c = CodeInputter(helper, "reactor") + with self.assertRaises(KeyFormatError) as e: + c.finish("1") + self.assertEqual(str(e.exception), "incomplete wormhole code") + + @inlineCallbacks + def test_rollback_nameplate_during_completion(self): + helper = mock.Mock() + gwc = helper.get_word_completions = mock.Mock() + gwc.configure_mock(return_value=["code", "court"]) + c = CodeInputter(helper, reactor) + cabc = c._commit_and_build_completions + matches = yield deferToThread(cabc, "1-co") # this commits us to 1- + self.assertEqual(helper.mock_calls, + [mock.call.choose_nameplate("1"), + mock.call.when_wordlist_is_available(), + mock.call.get_word_completions("co")]) + self.assertEqual(matches, ["1-code", "1-court"]) + helper.reset_mock() + with self.assertRaises(AlreadyInputNameplateError) as e: + yield deferToThread(cabc, 
"2-co") + self.assertEqual(str(e.exception), + "nameplate (1-) already entered, cannot go back") + self.assertEqual(helper.mock_calls, []) + + @inlineCallbacks + def test_rollback_nameplate_during_finish(self): + helper = mock.Mock() + gwc = helper.get_word_completions = mock.Mock() + gwc.configure_mock(return_value=["code", "court"]) + c = CodeInputter(helper, reactor) + cabc = c._commit_and_build_completions + matches = yield deferToThread(cabc, "1-co") # this commits us to 1- + self.assertEqual(helper.mock_calls, + [mock.call.choose_nameplate("1"), + mock.call.when_wordlist_is_available(), + mock.call.get_word_completions("co")]) + self.assertEqual(matches, ["1-code", "1-court"]) + helper.reset_mock() + with self.assertRaises(AlreadyInputNameplateError) as e: + c.finish("2-code") + self.assertEqual(str(e.exception), + "nameplate (1-) already entered, cannot go back") + self.assertEqual(helper.mock_calls, []) + + @mock.patch("wormhole._rlcompleter.stderr") + def test_warn_readline(self, stderr): + # there is no good way to test that this function gets used at the + # right time, since it involves a reactor and a "system event + # trigger", but let's at least make sure it's invocable + warn_readline() + expected = "\nCommand interrupted: please press Return to quit" + self.assertEqual(stderr.mock_calls, [mock.call.write(expected), + mock.call.write("\n")]) diff --git a/src/wormhole/test/test_wordlist.py b/src/wormhole/test/test_wordlist.py new file mode 100644 index 0000000..6b86cdb --- /dev/null +++ b/src/wormhole/test/test_wordlist.py @@ -0,0 +1,31 @@ +from __future__ import print_function, unicode_literals +import mock +from twisted.trial import unittest +from .._wordlist import PGPWordList + +class Completions(unittest.TestCase): + def test_completions(self): + wl = PGPWordList() + gc = wl.get_completions + self.assertEqual(gc("ar", 2), {"armistice-", "article-"}) + self.assertEqual(gc("armis", 2), {"armistice-"}) + self.assertEqual(gc("armistice", 2), 
{"armistice-"}) + lots = gc("armistice-", 2) + self.assertEqual(len(lots), 256, lots) + first = list(lots)[0] + self.assert_(first.startswith("armistice-"), first) + self.assertEqual(gc("armistice-ba", 2), + {"armistice-baboon", "armistice-backfield", + "armistice-backward", "armistice-banjo"}) + self.assertEqual(gc("armistice-ba", 3), + {"armistice-baboon-", "armistice-backfield-", + "armistice-backward-", "armistice-banjo-"}) + self.assertEqual(gc("armistice-baboon", 2), {"armistice-baboon"}) + self.assertEqual(gc("armistice-baboon", 3), {"armistice-baboon-"}) + self.assertEqual(gc("armistice-baboon", 4), {"armistice-baboon-"}) + +class Choose(unittest.TestCase): + def test_choose_words(self): + wl = PGPWordList() + with mock.patch("os.urandom", side_effect=[b"\x04", b"\x10"]): + self.assertEqual(wl.choose_words(2), "alkali-assume") diff --git a/src/wormhole/test/test_wormhole.py b/src/wormhole/test/test_wormhole.py index e7b4f5c..0dc8eef 100644 --- a/src/wormhole/test/test_wormhole.py +++ b/src/wormhole/test/test_wormhole.py @@ -1,19 +1,14 @@ from __future__ import print_function, unicode_literals -import os, json, re, gc, io -from binascii import hexlify, unhexlify +import json, io, re import mock from twisted.trial import unittest from twisted.internet import reactor -from twisted.internet.defer import Deferred, gatherResults, inlineCallbacks -from .common import ServerBase -from .. import wormhole -from ..errors import (WrongPasswordError, WelcomeError, InternalError, - KeyFormatError) -from spake2 import SPAKE2_Symmetric -from ..timing import DebugTiming -from ..util import (bytes_to_dict, dict_to_bytes, - hexstr_to_bytes, bytes_to_hexstr) -from nacl.secret import SecretBox +from twisted.internet.defer import gatherResults, inlineCallbacks +from .common import ServerBase, poll_until, pause_one_tick +from .. 
import wormhole, _rendezvous +from ..errors import (WrongPasswordError, + KeyFormatError, WormholeClosed, LonelyError, + NoKeyError, OnlyOneCodeError) APPID = "appid" @@ -37,684 +32,20 @@ def response(w, **kwargs): class Welcome(unittest.TestCase): def test_tolerate_no_current_version(self): - w = wormhole._WelcomeHandler("relay_url", "current_cli_version", None) + w = wormhole._WelcomeHandler("relay_url") w.handle_welcome({}) def test_print_motd(self): - w = wormhole._WelcomeHandler("relay_url", "current_cli_version", None) - with mock.patch("sys.stderr") as stderr: - w.handle_welcome({"motd": "message of\nthe day"}) - self.assertEqual(stderr.method_calls, - [mock.call.write("Server (at relay_url) says:\n" - " message of\n the day"), - mock.call.write("\n")]) - # motd can be displayed multiple times - with mock.patch("sys.stderr") as stderr2: - w.handle_welcome({"motd": "second message"}) - self.assertEqual(stderr2.method_calls, - [mock.call.write("Server (at relay_url) says:\n" - " second message"), - mock.call.write("\n")]) - - def test_current_version(self): - w = wormhole._WelcomeHandler("relay_url", "2.0", None) - with mock.patch("sys.stderr") as stderr: - w.handle_welcome({"current_cli_version": "2.0"}) - self.assertEqual(stderr.method_calls, []) - - with mock.patch("sys.stderr") as stderr: - w.handle_welcome({"current_cli_version": "3.0"}) - exp1 = ("Warning: errors may occur unless both sides are" - " running the same version") - exp2 = ("Server claims 3.0 is current, but ours is 2.0") - self.assertEqual(stderr.method_calls, - [mock.call.write(exp1), - mock.call.write("\n"), - mock.call.write(exp2), - mock.call.write("\n"), - ]) - - # warning is only displayed once - with mock.patch("sys.stderr") as stderr: - w.handle_welcome({"current_cli_version": "3.0"}) - self.assertEqual(stderr.method_calls, []) - - def test_non_release_version(self): - w = wormhole._WelcomeHandler("relay_url", "2.0-dirty", None) - with mock.patch("sys.stderr") as stderr: - 
w.handle_welcome({"current_cli_version": "3.0"}) - self.assertEqual(stderr.method_calls, []) - - def test_signal_error(self): - se = mock.Mock() - w = wormhole._WelcomeHandler("relay_url", "2.0", se) - w.handle_welcome({}) - self.assertEqual(se.mock_calls, []) - - w.handle_welcome({"error": "oops"}) - self.assertEqual(len(se.mock_calls), 1) - self.assertEqual(len(se.mock_calls[0][1]), 2) # posargs - we = se.mock_calls[0][1][0] - self.assertIsInstance(we, WelcomeError) - self.assertEqual(we.args, ("oops",)) - mood = se.mock_calls[0][1][1] - self.assertEqual(mood, "unwelcome") - # alas WelcomeError instances don't compare against each other - #self.assertEqual(se.mock_calls, [mock.call(WelcomeError("oops"))]) - -class InputCode(unittest.TestCase): - def test_list(self): - send_command = mock.Mock() stderr = io.StringIO() - ic = wormhole._InputCode(None, "prompt", 2, send_command, - DebugTiming(), stderr) - d = ic._list() - self.assertNoResult(d) - self.assertEqual(send_command.mock_calls, [mock.call("list")]) - ic._response_handle_nameplates({"type": "nameplates", - "nameplates": [{"id": "123"}]}) - res = self.successResultOf(d) - self.assertEqual(res, ["123"]) - self.assertEqual(stderr.getvalue(), "") - - -class GetCode(unittest.TestCase): - def test_get(self): - send_command = mock.Mock() - gc = wormhole._GetCode(2, send_command, DebugTiming()) - d = gc.go() - self.assertNoResult(d) - self.assertEqual(send_command.mock_calls, [mock.call("allocate")]) - # TODO: nameplate attributes get added and checked here - gc._response_handle_allocated({"type": "allocated", - "nameplate": "123"}) - code = self.successResultOf(d) - self.assertIsInstance(code, type("")) - self.assert_(code.startswith("123-")) - pieces = code.split("-") - self.assertEqual(len(pieces), 3) # nameplate plus two words - self.assert_(re.search(r'^\d+-\w+-\w+$', code), code) - -class Basic(unittest.TestCase): - def tearDown(self): - # flush out any errorful Deferreds left dangling in cycles - 
gc.collect() - - def check_out(self, out, **kwargs): - # Assert that each kwarg is present in the 'out' dict. Ignore other - # keys ('msgid' in particular) - for key, value in kwargs.items(): - self.assertIn(key, out) - self.assertEqual(out[key], value, (out, key, value)) - - def check_outbound(self, ws, types): - out = ws.outbound() - self.assertEqual(len(out), len(types), (out, types)) - for i,t in enumerate(types): - self.assertEqual(out[i]["type"], t, (i,t,out)) - return out - - def make_pake(self, code, side, msg1): - sp2 = SPAKE2_Symmetric(wormhole.to_bytes(code), - idSymmetric=wormhole.to_bytes(APPID)) - msg2 = sp2.start() - key = sp2.finish(msg1) - return key, msg2 - - def test_create(self): - wormhole._Wormhole(APPID, "relay_url", reactor, None, None, None) - - def test_basic(self): - # We don't call w._start(), so this doesn't create a WebSocket - # connection. We provide a mock connection instead. If we wanted to - # exercise _connect, we'd mock out WSFactory. - # w._connect = lambda self: None - # w._event_connected(mock_ws) - # w._event_ws_opened() - # w._ws_dispatch_response(payload) - - timing = DebugTiming() - with mock.patch("wormhole.wormhole._WelcomeHandler") as wh_c: - w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, - None) - wh = wh_c.return_value - self.assertEqual(w._ws_url, "relay_url") - self.assertTrue(w._flag_need_nameplate) - self.assertTrue(w._flag_need_to_build_msg1) - self.assertTrue(w._flag_need_to_send_PAKE) - - v = w.verify() - - w._drop_connection = mock.Mock() - ws = MockWebSocket() - w._event_connected(ws) - out = ws.outbound() - self.assertEqual(len(out), 0) - - w._event_ws_opened(None) - out = ws.outbound() - self.assertEqual(len(out), 1) - self.check_out(out[0], type="bind", appid=APPID, side=w._side) - self.assertIn("id", out[0]) - - # WelcomeHandler should get called upon 'welcome' response. Its full - # behavior is exercised in 'Welcome' above. 
- WELCOME = {"foo": "bar"} - response(w, type="welcome", welcome=WELCOME) - self.assertEqual(wh.mock_calls, [mock.call.handle_welcome(WELCOME)]) - - # because we're connected, setting the code also claims the mailbox - CODE = "123-foo-bar" - w.set_code(CODE) - self.assertFalse(w._flag_need_to_build_msg1) - out = ws.outbound() - self.assertEqual(len(out), 1) - self.check_out(out[0], type="claim", nameplate="123") - - # the server reveals the linked mailbox - response(w, type="claimed", mailbox="mb456") - - # that triggers event_learned_mailbox, which should send open() and - # PAKE - self.assertEqual(w._mailbox_state, wormhole.OPEN) - out = ws.outbound() - self.assertEqual(len(out), 2) - self.check_out(out[0], type="open", mailbox="mb456") - self.check_out(out[1], type="add", phase="pake") - self.assertNoResult(v) - - # server echoes back all "add" messages - response(w, type="message", phase="pake", body=out[1]["body"], - side=w._side) - self.assertNoResult(v) - - # extract our outbound PAKE message - body = bytes_to_dict(hexstr_to_bytes(out[1]["body"])) - msg1 = hexstr_to_bytes(body["pake_v1"]) - - # next we build the simulated peer's PAKE operation - side2 = w._side + "other" - key, msg2 = self.make_pake(CODE, side2, msg1) - payload = {"pake_v1": bytes_to_hexstr(msg2)} - body_hex = bytes_to_hexstr(dict_to_bytes(payload)) - response(w, type="message", phase="pake", body=body_hex, side=side2) - - # hearing the peer's PAKE (msg2) makes us release the nameplate, send - # the confirmation message, and sends any queued phase messages. It - # doesn't deliver the verifier because we're still waiting on the - # confirmation message. 
-        self.assertFalse(w._flag_need_to_see_mailbox_used)
-        self.assertEqual(w._key, key)
-        out = ws.outbound()
-        self.assertEqual(len(out), 2, out)
-        self.check_out(out[0], type="release")
-        self.check_out(out[1], type="add", phase="version")
-        self.assertNoResult(v)
-
-        # hearing a valid confirmation message doesn't throw an error
-        plaintext = json.dumps({}).encode("utf-8")
-        data_key = w._derive_phase_key(side2, "version")
-        confmsg = w._encrypt_data(data_key, plaintext)
-        version2_hex = hexlify(confmsg).decode("ascii")
-        response(w, type="message", phase="version", body=version2_hex,
-                 side=side2)
-
-        # and it releases the verifier
-        verifier = self.successResultOf(v)
-        self.assertEqual(verifier,
-                         w.derive_key("wormhole:verifier", SecretBox.KEY_SIZE))
-
-        # an outbound message can now be sent immediately
-        w.send(b"phase0-outbound")
-        out = ws.outbound()
-        self.assertEqual(len(out), 1)
-        self.check_out(out[0], type="add", phase="0")
-        # decrypt+check the outbound message
-        p0_outbound = unhexlify(out[0]["body"].encode("ascii"))
-        msgkey0 = w._derive_phase_key(w._side, "0")
-        p0_plaintext = w._decrypt_data(msgkey0, p0_outbound)
-        self.assertEqual(p0_plaintext, b"phase0-outbound")
-
-        # get() waits for the inbound message to arrive
-        md = w.get()
-        self.assertNoResult(md)
-        self.assertIn("0", w._receive_waiters)
-        self.assertNotIn("0", w._received_messages)
-        msgkey1 = w._derive_phase_key(side2, "0")
-        p0_inbound = w._encrypt_data(msgkey1, b"phase0-inbound")
-        p0_inbound_hex = hexlify(p0_inbound).decode("ascii")
-        response(w, type="message", phase="0", body=p0_inbound_hex,
-                 side=side2)
-        p0_in = self.successResultOf(md)
-        self.assertEqual(p0_in, b"phase0-inbound")
-        self.assertNotIn("0", w._receive_waiters)
-        self.assertIn("0", w._received_messages)
-
-        # receiving an inbound message will queue it until get() is called
-        msgkey2 = w._derive_phase_key(side2, "1")
-        p1_inbound = w._encrypt_data(msgkey2, b"phase1-inbound")
-        p1_inbound_hex = hexlify(p1_inbound).decode("ascii")
-        response(w, type="message", phase="1", body=p1_inbound_hex,
-                 side=side2)
-        self.assertIn("1", w._received_messages)
-        self.assertNotIn("1", w._receive_waiters)
-        p1_in = self.successResultOf(w.get())
-        self.assertEqual(p1_in, b"phase1-inbound")
-        self.assertIn("1", w._received_messages)
-        self.assertNotIn("1", w._receive_waiters)
-
-        d = w.close()
-        self.assertNoResult(d)
-        out = ws.outbound()
-        self.assertEqual(len(out), 1)
-        self.check_out(out[0], type="close", mood="happy")
-        self.assertEqual(w._drop_connection.mock_calls, [])
-
-        response(w, type="released")
-        self.assertEqual(w._drop_connection.mock_calls, [])
-        response(w, type="closed")
-        self.assertEqual(w._drop_connection.mock_calls, [mock.call()])
-        w._ws_closed(True, None, None)
-        self.assertEqual(self.successResultOf(d), None)
-
-    def test_close_wait_0(self):
-        # Close before the connection is established. The connection still
-        # gets established, but it is then torn down before sending anything.
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        w._drop_connection = mock.Mock()
-
-        d = w.close()
-        self.assertNoResult(d)
-
-        ws = MockWebSocket()
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-        self.assertEqual(w._drop_connection.mock_calls, [mock.call()])
-        self.assertNoResult(d)
-
-        w._ws_closed(True, None, None)
-        self.successResultOf(d)
-
-    def test_close_wait_1(self):
-        # close before even claiming the nameplate
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        w._drop_connection = mock.Mock()
-        ws = MockWebSocket()
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-
-        d = w.close()
-        self.check_outbound(ws, ["bind"])
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [mock.call()])
-        self.assertNoResult(d)
-
-        w._ws_closed(True, None, None)
-        self.successResultOf(d)
-
-    def test_close_wait_2(self):
-        # Close after claiming the nameplate, but before opening the mailbox.
-        # The 'claimed' response arrives before we close.
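The `test_close_wait_*` tests all pin down the same invariant: a closing wormhole must wait for a "released" ack for any claimed nameplate and a "closed" ack for any opened mailbox before dropping the connection. A toy stdlib model of that invariant (the `Teardown` class is hypothetical, purely illustrative of the ordering the tests assert):

```python
class Teardown:
    """Toy model: drop the connection only after all required acks arrive."""
    def __init__(self, claimed, opened):
        self.need_release = claimed  # must see "released" before dropping
        self.need_close = opened     # must see "closed" before dropping
        self.dropped = False

    def start_close(self):
        self._maybe_drop()

    def rx_released(self):
        self.need_release = False
        self._maybe_drop()

    def rx_closed(self):
        self.need_close = False
        self._maybe_drop()

    def _maybe_drop(self):
        if not (self.need_release or self.need_close):
            self.dropped = True

# close_wait_2 style: both acks required, in either order
t = Teardown(claimed=True, opened=True)
t.start_close()
t.rx_released()
assert not t.dropped   # still waiting for "closed"
t.rx_closed()
assert t.dropped

# close_wait_0 style: nothing claimed or opened yet, drop right away
t0 = Teardown(claimed=False, opened=False)
t0.start_close()
assert t0.dropped
```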
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        w._drop_connection = mock.Mock()
-        ws = MockWebSocket()
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-        CODE = "123-foo-bar"
-        w.set_code(CODE)
-        self.check_outbound(ws, ["bind", "claim"])
-
-        response(w, type="claimed", mailbox="mb123")
-
-        d = w.close()
-        self.check_outbound(ws, ["open", "add", "release", "close"])
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [])
-
-        response(w, type="released")
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [])
-
-        response(w, type="closed")
-        self.assertEqual(w._drop_connection.mock_calls, [mock.call()])
-        self.assertNoResult(d)
-
-        w._ws_closed(True, None, None)
-        self.successResultOf(d)
-
-    def test_close_wait_3(self):
-        # close after claiming the nameplate, but before opening the mailbox
-        # The 'claimed' response arrives after we start to close.
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        w._drop_connection = mock.Mock()
-        ws = MockWebSocket()
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-        CODE = "123-foo-bar"
-        w.set_code(CODE)
-        self.check_outbound(ws, ["bind", "claim"])
-
-        d = w.close()
-        response(w, type="claimed", mailbox="mb123")
-        self.check_outbound(ws, ["release"])
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [])
-
-        response(w, type="released")
-        self.assertEqual(w._drop_connection.mock_calls, [mock.call()])
-        self.assertNoResult(d)
-
-        w._ws_closed(True, None, None)
-        self.successResultOf(d)
-
-    def test_close_wait_4(self):
-        # close after both claiming the nameplate and opening the mailbox
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        w._drop_connection = mock.Mock()
-        ws = MockWebSocket()
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-        CODE = "123-foo-bar"
-        w.set_code(CODE)
-        response(w, type="claimed", mailbox="mb456")
-        self.check_outbound(ws, ["bind", "claim", "open", "add"])
-
-        d = w.close()
-        self.check_outbound(ws, ["release", "close"])
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [])
-
-        response(w, type="released")
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [])
-
-        response(w, type="closed")
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [mock.call()])
-
-        w._ws_closed(True, None, None)
-        self.successResultOf(d)
-
-    def test_close_wait_5(self):
-        # close after claiming the nameplate, opening the mailbox, then
-        # releasing the nameplate
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        w._drop_connection = mock.Mock()
-        ws = MockWebSocket()
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-        CODE = "123-foo-bar"
-        w.set_code(CODE)
-        response(w, type="claimed", mailbox="mb456")
-
-        w._key = b""
-        msgkey = w._derive_phase_key("side2", "misc")
-        p1_inbound = w._encrypt_data(msgkey, b"")
-        p1_inbound_hex = hexlify(p1_inbound).decode("ascii")
-        response(w, type="message", phase="misc", side="side2",
-                 body=p1_inbound_hex)
-        self.check_outbound(ws, ["bind", "claim", "open", "add",
-                                 "release"])
-
-        d = w.close()
-        self.check_outbound(ws, ["close"])
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [])
-
-        response(w, type="released")
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [])
-
-        response(w, type="closed")
-        self.assertNoResult(d)
-        self.assertEqual(w._drop_connection.mock_calls, [mock.call()])
-
-        w._ws_closed(True, None, None)
-        self.successResultOf(d)
-
-    def test_close_errbacks(self):
-        # make sure the Deferreds returned by verify() and get() are properly
-        # errbacked upon close
-        pass
-
-    def test_get_code_mock(self):
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        ws = MockWebSocket()  # TODO: mock w._ws_send_command instead
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-        self.check_outbound(ws, ["bind"])
-
-        gc_c = mock.Mock()
-        gc = gc_c.return_value = mock.Mock()
-        gc_d = gc.go.return_value = Deferred()
-        with mock.patch("wormhole.wormhole._GetCode", gc_c):
-            d = w.get_code()
-        self.assertNoResult(d)
-
-        gc_d.callback("123-foo-bar")
-        code = self.successResultOf(d)
-        self.assertEqual(code, "123-foo-bar")
-
-    def test_get_code_real(self):
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        ws = MockWebSocket()
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-        self.check_outbound(ws, ["bind"])
-
-        d = w.get_code()
-
-        out = ws.outbound()
-        self.assertEqual(len(out), 1)
-        self.check_out(out[0], type="allocate")
-        # TODO: nameplate attributes go here
-        self.assertNoResult(d)
-
-        response(w, type="allocated", nameplate="123")
-        code = self.successResultOf(d)
-        self.assertIsInstance(code, type(""))
-        self.assert_(code.startswith("123-"))
-        pieces = code.split("-")
-        self.assertEqual(len(pieces), 3)  # nameplate plus two words
-        self.assert_(re.search(r'^\d+-\w+-\w+$', code), code)
-
-    def _test_establish_key_hook(self, established, before):
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-
-        if before:
-            d = w.establish_key()
-
-        if established is True:
-            w._key = b"key"
-        elif established is False:
-            w._key = None
-        else:
-            w._key = b"key"
-            w._error = WelcomeError()
-
-        if not before:
-            d = w.establish_key()
-        else:
-            w._maybe_notify_key()
-
-        if w._key is not None and established is True:
-            self.successResultOf(d)
-        elif established is False:
-            self.assertNot(d.called)
-        else:
-            self.failureResultOf(d)
-
-    def test_establish_key_hook(self):
-        for established in (True, False, "error"):
-            for before in (True, False):
-                self._test_establish_key_hook(established, before)
-
-    def test_establish_key_twice(self):
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        d = w.establish_key()
-        self.assertRaises(InternalError, w.establish_key)
-        del d
-
-    # make sure verify() can be called both before and after the verifier is
-    # computed
-
-    def _test_verifier(self, when, order, success):
-        assert when in ("early", "middle", "late")
-        assert order in ("key-then-version", "version-then-key")
-        assert isinstance(success, bool)
-        #print(when, order, success)
-
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        w._drop_connection = mock.Mock()
-        w._ws_send_command = mock.Mock()
-        w._mailbox_state = wormhole.OPEN
-        side2 = "side2"
-        d = None
-
-        if success:
-            w._key = b"key"
-        else:
-            w._key = b"wrongkey"
-        plaintext = json.dumps({}).encode("utf-8")
-        data_key = w._derive_phase_key(side2, "version")
-        confmsg = w._encrypt_data(data_key, plaintext)
-        w._key = None
-
-        if when == "early":
-            d = w.verify()
-            self.assertNoResult(d)
-
-        if order == "key-then-version":
-            w._key = b"key"
-            w._event_established_key()
-        else:
-            w._event_received_version(side2, confmsg)
-
-        if when == "middle":
-            d = w.verify()
-        if d:
-            self.assertNoResult(d)  # still waiting for other msg
-
-        if order == "version-then-key":
-            w._key = b"key"
-            w._event_established_key()
-        else:
-            w._event_received_version(side2, confmsg)
-
-        if when == "late":
-            d = w.verify()
-        if success:
-            self.successResultOf(d)
-        else:
-            self.assertFailure(d, wormhole.WrongPasswordError)
-            self.flushLoggedErrors(WrongPasswordError)
-
-    def test_verifier(self):
-        for when in ("early", "middle", "late"):
-            for order in ("key-then-version", "version-then-key"):
-                for success in (False, True):
-                    self._test_verifier(when, order, success)
-
-
-    def test_api_errors(self):
-        # doing things you're not supposed to do
-        pass
-
-    def test_welcome_error(self):
-        # A welcome message could arrive at any time, with an [error] key
-        # that should make us halt. In practice, though, this gets sent as
-        # soon as the connection is established, which limits the possible
-        # states in which we might see it.
-
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        w._drop_connection = mock.Mock()
-        ws = MockWebSocket()
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-        self.check_outbound(ws, ["bind"])
-
-        d1 = w.get()
-        d2 = w.verify()
-        d3 = w.get_code()
-        # TODO (tricky): test w.input_code
-
-        self.assertNoResult(d1)
-        self.assertNoResult(d2)
-        self.assertNoResult(d3)
-
-        w._signal_error(WelcomeError("you are not actually welcome"), "pouty")
-        self.failureResultOf(d1, WelcomeError)
-        self.failureResultOf(d2, WelcomeError)
-        self.failureResultOf(d3, WelcomeError)
-
-        # once the error is signalled, all API calls should fail
-        self.assertRaises(WelcomeError, w.send, "foo")
-        self.assertRaises(WelcomeError,
-                          w.derive_key, "foo", SecretBox.KEY_SIZE)
-        self.failureResultOf(w.get(), WelcomeError)
-        self.failureResultOf(w.verify(), WelcomeError)
-
-    def test_version_error(self):
-        # we should only receive the "version" message after we receive the
-        # PAKE message, by which point we should know the key. If the
-        # confirmation message doesn't decrypt, we signal an error.
-        timing = DebugTiming()
-        w = wormhole._Wormhole(APPID, "relay_url", reactor, None, timing, None)
-        w._drop_connection = mock.Mock()
-        ws = MockWebSocket()
-        w._event_connected(ws)
-        w._event_ws_opened(None)
-        w.set_code("123-foo-bar")
-        response(w, type="claimed", mailbox="mb456")
-
-        d1 = w.get()
-        d2 = w.verify()
-        self.assertNoResult(d1)
-        self.assertNoResult(d2)
-
-        out = ws.outbound()
-        # ["bind", "claim", "open", "add"]
-        self.assertEqual(len(out), 4)
-        self.assertEqual(out[3]["type"], "add")
-
-        sp2 = SPAKE2_Symmetric(b"", idSymmetric=wormhole.to_bytes(APPID))
-        msg2 = sp2.start()
-        payload = {"pake_v1": bytes_to_hexstr(msg2)}
-        body_hex = bytes_to_hexstr(dict_to_bytes(payload))
-        response(w, type="message", phase="pake", body=body_hex, side="s2")
-        self.assertNoResult(d1)
-        self.assertNoResult(d2)  # verify() waits for confirmation
-
-        # sending a random version message will cause a confirmation error
-        confkey = w.derive_key("WRONG", SecretBox.KEY_SIZE)
-        nonce = os.urandom(wormhole.CONFMSG_NONCE_LENGTH)
-        badversion = wormhole.make_confmsg(confkey, nonce)
-        badversion_hex = hexlify(badversion).decode("ascii")
-        response(w, type="message", phase="version", body=badversion_hex,
-                 side="s2")
-
-        self.failureResultOf(d1, WrongPasswordError)
-        self.failureResultOf(d2, WrongPasswordError)
-
-        # once the error is signalled, all API calls should fail
-        self.assertRaises(WrongPasswordError, w.send, "foo")
-        self.assertRaises(WrongPasswordError,
-                          w.derive_key, "foo", SecretBox.KEY_SIZE)
-        self.failureResultOf(w.get(), WrongPasswordError)
-        self.failureResultOf(w.verify(), WrongPasswordError)
-
+        w = wormhole._WelcomeHandler("relay_url", stderr=stderr)
+        w.handle_welcome({"motd": "message of\nthe day"})
+        self.assertEqual(stderr.getvalue(),
+                         "Server (at relay_url) says:\n message of\n the day\n")
+        # motd can be displayed multiple times
+        w.handle_welcome({"motd": "second message"})
+        self.assertEqual(stderr.getvalue(),
+                         ("Server (at relay_url) says:\n message of\n the day\n"
+                          "Server (at relay_url) says:\n second message\n"))
 
 # event orderings to exercise:
 #
@@ -727,40 +58,127 @@ class Basic(unittest.TestCase):
 # * set_code, then connected
 # * connected, receive_pake, send_phase, set_code
 
+class Delegate:
+    def __init__(self):
+        self.code = None
+        self.verifier = None
+        self.messages = []
+        self.closed = None
+    def wormhole_got_code(self, code):
+        self.code = code
+    def wormhole_got_verifier(self, verifier):
+        self.verifier = verifier
+    def wormhole_receive(self, data):
+        self.messages.append(data)
+    def wormhole_closed(self, result):
+        self.closed = result
+
+class Delegated(ServerBase, unittest.TestCase):
+
+    def test_delegated(self):
+        dg = Delegate()
+        w = wormhole.create(APPID, self.relayurl, reactor, delegate=dg)
+        w.close()
+
 class Wormholes(ServerBase, unittest.TestCase):
     # integration test, with a real server
 
     def doBoth(self, d1, d2):
         return gatherResults([d1, d2], True)
 
+    @inlineCallbacks
+    def test_allocate_default(self):
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.allocate_code()
+        code = yield w1.when_code()
+        mo = re.search(r"^\d+-\w+-\w+$", code)
+        self.assert_(mo, code)
+        # w.close() fails because we closed before connecting
+        yield self.assertFailure(w1.close(), LonelyError)
+
+    @inlineCallbacks
+    def test_allocate_more_words(self):
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.allocate_code(3)
+        code = yield w1.when_code()
+        mo = re.search(r"^\d+-\w+-\w+-\w+$", code)
+        self.assert_(mo, code)
+        yield self.assertFailure(w1.close(), LonelyError)
 
     @inlineCallbacks
     def test_basic(self):
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        code = yield w1.get_code()
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        #w1.debug_set_trace("W1")
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
+        #w2.debug_set_trace(" W2")
+        w1.allocate_code()
+        code = yield w1.when_code()
         w2.set_code(code)
+
+        yield w1.when_key()
+        yield w2.when_key()
+
+        verifier1 = yield w1.when_verified()
+        verifier2 = yield w2.when_verified()
+        self.assertEqual(verifier1, verifier2)
+
+        self.successResultOf(w1.when_key())
+        self.successResultOf(w2.when_key())
+
+        version1 = yield w1.when_version()
+        version2 = yield w2.when_version()
+        # app-versions are exercised properly in test_versions, this just
+        # tests the defaults
+        self.assertEqual(version1, {})
+        self.assertEqual(version2, {})
+
         w1.send(b"data1")
         w2.send(b"data2")
-        dataX = yield w1.get()
-        dataY = yield w2.get()
+        dataX = yield w1.when_received()
+        dataY = yield w2.when_received()
         self.assertEqual(dataX, b"data2")
         self.assertEqual(dataY, b"data1")
-        yield w1.close()
-        yield w2.close()
+
+        version1_again = yield w1.when_version()
+        self.assertEqual(version1, version1_again)
+
+        c1 = yield w1.close()
+        self.assertEqual(c1, "happy")
+        c2 = yield w2.close()
+        self.assertEqual(c2, "happy")
+
+    @inlineCallbacks
+    def test_when_code_early(self):
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        d = w1.when_code()
+        w1.set_code("1-abc")
+        code = self.successResultOf(d)
+        self.assertEqual(code, "1-abc")
+        yield self.assertFailure(w1.close(), LonelyError)
+
+    @inlineCallbacks
+    def test_when_code_late(self):
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.set_code("1-abc")
+        d = w1.when_code()
+        code = self.successResultOf(d)
+        self.assertEqual(code, "1-abc")
+        yield self.assertFailure(w1.close(), LonelyError)
 
     @inlineCallbacks
     def test_same_message(self):
         # the two sides use random nonces for their messages, so it's ok for
         # both to try and send the same body: they'll result in distinct
         # encrypted messages
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        code = yield w1.get_code()
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.allocate_code()
+        code = yield w1.when_code()
        w2.set_code(code)
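An aside on the `test_when_code_early`/`test_when_code_late` pair added here: together they pin down the one-shot observer contract behind the `when_*` methods — a Deferred obtained before the event waits, and one obtained after the event fires immediately with the cached result. A plain-Python sketch of that pattern (no Twisted; the `OneShotObserver` class name is hypothetical):

```python
class OneShotObserver:
    """Callbacks registered before the event wait; callbacks registered
    after it fire immediately with the cached value."""
    def __init__(self):
        self._fired = False
        self._result = None
        self._watchers = []

    def when_fired(self, callback):
        if self._fired:
            callback(self._result)  # late subscriber: deliver cached result
        else:
            self._watchers.append(callback)

    def fire(self, result):
        assert not self._fired, "one-shot: may only fire once"
        self._fired = True
        self._result = result
        watchers, self._watchers = self._watchers, []
        for cb in watchers:
            cb(result)

obs = OneShotObserver()
early = []
obs.when_fired(early.append)   # "early": registered before the event
obs.fire("1-abc")
late = []
obs.when_fired(late.append)    # "late": registered after the event
assert early == ["1-abc"] and late == ["1-abc"]
```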
         w1.send(b"data")
         w2.send(b"data")
-        dataX = yield w1.get()
-        dataY = yield w2.get()
+        dataX = yield w1.when_received()
+        dataY = yield w2.when_received()
         self.assertEqual(dataX, b"data")
         self.assertEqual(dataY, b"data")
         yield w1.close()
@@ -768,14 +186,15 @@ class Wormholes(ServerBase, unittest.TestCase):
 
     @inlineCallbacks
     def test_interleaved(self):
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        code = yield w1.get_code()
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.allocate_code()
+        code = yield w1.when_code()
         w2.set_code(code)
         w1.send(b"data1")
-        dataY = yield w2.get()
+        dataY = yield w2.when_received()
         self.assertEqual(dataY, b"data1")
-        d = w1.get()
+        d = w1.when_received()
         w2.send(b"data2")
         dataX = yield d
         self.assertEqual(dataX, b"data2")
@@ -784,22 +203,23 @@ class Wormholes(ServerBase, unittest.TestCase):
 
     @inlineCallbacks
     def test_unidirectional(self):
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        code = yield w1.get_code()
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.allocate_code()
+        code = yield w1.when_code()
         w2.set_code(code)
         w1.send(b"data1")
-        dataY = yield w2.get()
+        dataY = yield w2.when_received()
         self.assertEqual(dataY, b"data1")
         yield w1.close()
         yield w2.close()
 
     @inlineCallbacks
     def test_early(self):
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
         w1.send(b"data1")
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        d = w2.get()
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
+        d = w2.when_received()
         w1.set_code("123-abc-def")
         w2.set_code("123-abc-def")
         dataY = yield d
@@ -809,12 +229,34 @@ class Wormholes(ServerBase, unittest.TestCase):
 
     @inlineCallbacks
     def test_fixed_code(self):
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
         w1.set_code("123-purple-elephant")
         w2.set_code("123-purple-elephant")
         w1.send(b"data1"), w2.send(b"data2")
-        dl = yield self.doBoth(w1.get(), w2.get())
+        dl = yield self.doBoth(w1.when_received(), w2.when_received())
+        (dataX, dataY) = dl
+        self.assertEqual(dataX, b"data2")
+        self.assertEqual(dataY, b"data1")
+        yield w1.close()
+        yield w2.close()
+
+    @inlineCallbacks
+    def test_input_code(self):
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.set_code("123-purple-elephant")
+        h = w2.input_code()
+        h.choose_nameplate("123")
+        # Pause to allow some messages to get delivered. Specifically we want
+        # to wait until w2 claims the nameplate, opens the mailbox, and
+        # receives the PAKE message, to exercise the PAKE-before-CODE path in
+        # Key.
+        yield poll_until(lambda: w2._boss._K._debug_pake_stashed)
+        h.choose_words("purple-elephant")
+
+        w1.send(b"data1"), w2.send(b"data2")
+        dl = yield self.doBoth(w1.when_received(), w2.when_received())
         (dataX, dataY) = dl
         self.assertEqual(dataX, b"data2")
         self.assertEqual(dataY, b"data1")
@@ -824,29 +266,56 @@ class Wormholes(ServerBase, unittest.TestCase):
 
     @inlineCallbacks
     def test_multiple_messages(self):
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
         w1.set_code("123-purple-elephant")
         w2.set_code("123-purple-elephant")
         w1.send(b"data1"), w2.send(b"data2")
         w1.send(b"data3"), w2.send(b"data4")
-        dl = yield self.doBoth(w1.get(), w2.get())
+        dl = yield self.doBoth(w1.when_received(), w2.when_received())
         (dataX, dataY) = dl
         self.assertEqual(dataX, b"data2")
         self.assertEqual(dataY, b"data1")
-        dl = yield self.doBoth(w1.get(), w2.get())
+        dl = yield self.doBoth(w1.when_received(), w2.when_received())
         (dataX, dataY) = dl
         self.assertEqual(dataX, b"data4")
         self.assertEqual(dataY, b"data3")
         yield w1.close()
         yield w2.close()
+
+    @inlineCallbacks
+    def test_closed(self):
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.set_code("123-foo")
+        w2.set_code("123-foo")
+
+        # let it connect and become HAPPY
+        yield w1.when_version()
+        yield w2.when_version()
+
+        yield w1.close()
+        yield w2.close()
+
+        # once closed, all Deferred-yielding API calls get an immediate error
+        f = self.failureResultOf(w1.when_code(), WormholeClosed)
+        self.assertEqual(f.value.args[0], "happy")
+        self.failureResultOf(w1.when_key(), WormholeClosed)
+        self.failureResultOf(w1.when_verified(), WormholeClosed)
+        self.failureResultOf(w1.when_version(), WormholeClosed)
+        self.failureResultOf(w1.when_received(), WormholeClosed)
+
+    @inlineCallbacks
     def test_wrong_password(self):
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        code = yield w1.get_code()
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.allocate_code()
+        code = yield w1.when_code()
         w2.set_code(code+"not")
+        code2 = yield w2.when_code()
+        self.assertNotEqual(code, code2)
         # That's enough to allow both sides to discover the mismatch, but
         # only after the confirmation message gets through. API calls that
         # don't wait will appear to work until the mismatched confirmation
@@ -854,63 +323,114 @@ class Wormholes(ServerBase, unittest.TestCase):
         w1.send(b"should still work")
         w2.send(b"should still work")
-        # API calls that wait (i.e. get) will errback
-        yield self.assertFailure(w2.get(), WrongPasswordError)
-        yield self.assertFailure(w1.get(), WrongPasswordError)
+        key2 = yield w2.when_key()  # should work
+        # w2 has just received w1.PAKE, and is about to send w2.VERSION
+        key1 = yield w1.when_key()  # should work
+        # w1 has just received w2.PAKE, and is about to send w1.VERSION, and
+        # then will receive w2.VERSION. When it sees w2.VERSION, it will
+        # learn about the WrongPasswordError.
+        self.assertNotEqual(key1, key2)
-        yield w1.close()
-        yield w2.close()
-        self.flushLoggedErrors(WrongPasswordError)
+        # API calls that wait (i.e. get) will errback. We collect all these
+        # Deferreds early to exercise the wait-then-fail path
+        d1_verified = w1.when_verified()
+        d1_version = w1.when_version()
+        d1_received = w1.when_received()
+        d2_verified = w2.when_verified()
+        d2_version = w2.when_version()
+        d2_received = w2.when_received()
+
+        # wait for each side to notice the failure
+        yield self.assertFailure(w1.when_verified(), WrongPasswordError)
+        yield self.assertFailure(w2.when_verified(), WrongPasswordError)
+        # and then wait for the rest of the loops to fire. if we had+used
+        # eventual-send, this wouldn't be a problem
+        yield pause_one_tick()
+
+        # now all the rest should have fired already
+        self.failureResultOf(d1_verified, WrongPasswordError)
+        self.failureResultOf(d1_version, WrongPasswordError)
+        self.failureResultOf(d1_received, WrongPasswordError)
+        self.failureResultOf(d2_verified, WrongPasswordError)
+        self.failureResultOf(d2_version, WrongPasswordError)
+        self.failureResultOf(d2_received, WrongPasswordError)
+
+        # and at this point, with the failure safely noticed by both sides,
+        # new when_key() calls should signal the failure, even before we
+        # close
+
+        # any new calls in the error state should immediately fail
+        self.failureResultOf(w1.when_key(), WrongPasswordError)
+        self.failureResultOf(w1.when_verified(), WrongPasswordError)
+        self.failureResultOf(w1.when_version(), WrongPasswordError)
+        self.failureResultOf(w1.when_received(), WrongPasswordError)
+        self.failureResultOf(w2.when_key(), WrongPasswordError)
+        self.failureResultOf(w2.when_verified(), WrongPasswordError)
+        self.failureResultOf(w2.when_version(), WrongPasswordError)
+        self.failureResultOf(w2.when_received(), WrongPasswordError)
+
+        yield self.assertFailure(w1.close(), WrongPasswordError)
+        yield self.assertFailure(w2.close(), WrongPasswordError)
+
+        # API calls should still get the error, not WormholeClosed
+        self.failureResultOf(w1.when_key(), WrongPasswordError)
+        self.failureResultOf(w1.when_verified(), WrongPasswordError)
+        self.failureResultOf(w1.when_version(), WrongPasswordError)
+        self.failureResultOf(w1.when_received(), WrongPasswordError)
+        self.failureResultOf(w2.when_key(), WrongPasswordError)
+        self.failureResultOf(w2.when_verified(), WrongPasswordError)
+        self.failureResultOf(w2.when_version(), WrongPasswordError)
+        self.failureResultOf(w2.when_received(), WrongPasswordError)
 
     @inlineCallbacks
     def test_wrong_password_with_spaces(self):
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        code = yield w1.get_code()
-        code_no_dashes = code.replace('-', ' ')
-
+        w = wormhole.create(APPID, self.relayurl, reactor)
+        badcode = "4 oops spaces"
         with self.assertRaises(KeyFormatError) as ex:
-            w2.set_code(code_no_dashes)
-
-        expected_msg = "code (%s) contains spaces." % (code_no_dashes,)
+            w.set_code(badcode)
+        expected_msg = "code (%s) contains spaces." % (badcode,)
         self.assertEqual(expected_msg, str(ex.exception))
-
-        yield w1.close()
-        yield w2.close()
-        self.flushLoggedErrors(KeyFormatError)
+        yield self.assertFailure(w.close(), LonelyError)
 
     @inlineCallbacks
     def test_verifier(self):
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        code = yield w1.get_code()
+        w1 = wormhole.create(APPID, self.relayurl, reactor)
+        w2 = wormhole.create(APPID, self.relayurl, reactor)
+        w1.allocate_code()
+        code = yield w1.when_code()
         w2.set_code(code)
-        v1 = yield w1.verify()
-        v2 = yield w2.verify()
+        v1 = yield w1.when_verified()  # early
+        v2 = yield w2.when_verified()
         self.failUnlessEqual(type(v1), type(b""))
         self.failUnlessEqual(v1, v2)
         w1.send(b"data1")
         w2.send(b"data2")
-        dataX = yield w1.get()
-        dataY = yield w2.get()
+        dataX = yield w1.when_received()
+        dataY = yield w2.when_received()
         self.assertEqual(dataX, b"data2")
         self.assertEqual(dataY, b"data1")
+
+        # calling when_verified() this late should fire right away
+        v1_late = self.successResultOf(w2.when_verified())
+        self.assertEqual(v1_late, v1)
+
         yield w1.close()
         yield w2.close()
 
     @inlineCallbacks
     def test_versions(self):
         # there's no API for this yet, but make sure the internals work
-        w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w1._my_versions = {"w1": 123}
-        w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
-        w2._my_versions = {"w2": 456}
-        code = yield w1.get_code()
+        w1 = wormhole.create(APPID, self.relayurl, reactor,
                             versions={"w1": 123})
+        w2 = wormhole.create(APPID, self.relayurl, reactor,
+                             versions={"w2": 456})
+        w1.allocate_code()
+        code = yield w1.when_code()
         w2.set_code(code)
-        yield w1.verify()
-        self.assertEqual(w1._their_versions, {"w2": 456})
-        yield w2.verify()
-        self.assertEqual(w2._their_versions, {"w1": 123})
+        w1_versions = yield w2.when_version()
+        self.assertEqual(w1_versions, {"w1": 123})
+        w2_versions = yield w1.when_version()
+        self.assertEqual(w2_versions, {"w2": 456})
         yield w1.close()
         yield w2.close()
@@ -923,50 +443,53 @@ class Wormholes(ServerBase, unittest.TestCase):
         # incoming PAKE message was received, which would cause
         # SPAKE2.finish() to be called a second time, which throws an error
         # (which, being somewhat unexpected, caused a hang rather than a
-        # clear exception).
-        with mock.patch("wormhole.wormhole._Wormhole", MessageDoublingReceiver):
-            w1 = wormhole.wormhole(APPID, self.relayurl, reactor)
-            w2 = wormhole.wormhole(APPID, self.relayurl, reactor)
+        # clear exception). The Mailbox object is responsible for
+        # deduplication, so we must patch the RendezvousConnector to simulate
+        # duplicated messages.
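An aside on the doubled-message test: the patch described in the comment above only makes sense because inbound messages carry a (side, phase) identity that the receive side can de-duplicate on. A minimal stdlib sketch of that receive-side de-duplication (the `Dedup` class is hypothetical, not the real Mailbox):

```python
class Dedup:
    """Deliver each inbound message once, keyed on its (side, phase) pair."""
    def __init__(self):
        self._seen = set()
        self.delivered = []

    def receive(self, side, phase, body):
        key = (side, phase)
        if key in self._seen:
            return  # duplicate: drop silently
        self._seen.add(key)
        self.delivered.append(body)

d = Dedup()
d.receive("s2", "0", b"hello")
d.receive("s2", "0", b"hello")   # doubled message, ignored
assert d.delivered == [b"hello"]
```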
+        with mock.patch("wormhole._boss.RendezvousConnector", MessageDoubler):
+            w1 = wormhole.create(APPID, self.relayurl, reactor)
+            w2 = wormhole.create(APPID, self.relayurl, reactor)
         w1.set_code("123-purple-elephant")
         w2.set_code("123-purple-elephant")
         w1.send(b"data1"), w2.send(b"data2")
-        dl = yield self.doBoth(w1.get(), w2.get())
+        dl = yield self.doBoth(w1.when_received(), w2.when_received())
         (dataX, dataY) = dl
         self.assertEqual(dataX, b"data2")
         self.assertEqual(dataY, b"data1")
         yield w1.close()
         yield w2.close()
 
-class MessageDoublingReceiver(wormhole._Wormhole):
+class MessageDoubler(_rendezvous.RendezvousConnector):
     # we could double messages on the sending side, but a future server will
     # strip those duplicates, so to really exercise the receiver, we must
     # double them on the inbound side instead
     #def _msg_send(self, phase, body):
     #    wormhole._Wormhole._msg_send(self, phase, body)
     #    self._ws_send_command("add", phase=phase, body=bytes_to_hexstr(body))
-    def _event_received_peer_message(self, side, phase, body):
-        wormhole._Wormhole._event_received_peer_message(self, side, phase, body)
-        wormhole._Wormhole._event_received_peer_message(self, side, phase, body)
+    def _response_handle_message(self, msg):
+        _rendezvous.RendezvousConnector._response_handle_message(self, msg)
+        _rendezvous.RendezvousConnector._response_handle_message(self, msg)
 
 class Errors(ServerBase, unittest.TestCase):
     @inlineCallbacks
-    def test_codes_1(self):
-        w = wormhole.wormhole(APPID, self.relayurl, reactor)
+    def test_derive_key_early(self):
+        w = wormhole.create(APPID, self.relayurl, reactor)
         # definitely too early
-        self.assertRaises(InternalError, w.derive_key, "purpose", 12)
-
-        w.set_code("123-purple-elephant")
-        # code can only be set once
-        self.assertRaises(InternalError, w.set_code, "123-nope")
-        yield self.assertFailure(w.get_code(), InternalError)
-        yield self.assertFailure(w.input_code(), InternalError)
-        yield w.close()
+        self.assertRaises(NoKeyError, w.derive_key, "purpose", 12)
+        yield self.assertFailure(w.close(), LonelyError)
 
     @inlineCallbacks
-    def test_codes_2(self):
-        w = wormhole.wormhole(APPID, self.relayurl, reactor)
-        yield w.get_code()
-        self.assertRaises(InternalError, w.set_code, "123-nope")
-        yield self.assertFailure(w.get_code(), InternalError)
-        yield self.assertFailure(w.input_code(), InternalError)
-        yield w.close()
+    def test_multiple_set_code(self):
+        w = wormhole.create(APPID, self.relayurl, reactor)
+        w.set_code("123-purple-elephant")
+        # code can only be set once
+        self.assertRaises(OnlyOneCodeError, w.set_code, "123-nope")
+        yield self.assertFailure(w.close(), LonelyError)
+
+    @inlineCallbacks
+    def test_allocate_and_set_code(self):
+        w = wormhole.create(APPID, self.relayurl, reactor)
+        w.allocate_code()
+        yield w.when_code()
+        self.assertRaises(OnlyOneCodeError, w.set_code, "123-nope")
+        yield self.assertFailure(w.close(), LonelyError)
diff --git a/src/wormhole/timing.py b/src/wormhole/timing.py
index 0ecf1bc..8cb18e5 100644
--- a/src/wormhole/timing.py
+++ b/src/wormhole/timing.py
@@ -1,5 +1,7 @@
 from __future__ import print_function, absolute_import, unicode_literals
 import json, time
+from zope.interface import implementer
+from ._interfaces import ITiming
 
 class Event:
     def __init__(self, name, when, **details):
@@ -33,6 +35,7 @@ class Event:
         else:
             self.finish()
 
+@implementer(ITiming)
 class DebugTiming:
     def __init__(self):
         self._events = []
diff --git a/src/wormhole/tor_manager.py b/src/wormhole/tor_manager.py
index 85834bc..b74b207 100644
--- a/src/wormhole/tor_manager.py
+++ b/src/wormhole/tor_manager.py
@@ -1,6 +1,7 @@
 from __future__ import print_function, unicode_literals
 import sys, re
 import six
+from zope.interface import implementer
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.internet.error import ConnectError
 from twisted.internet.endpoints import clientFromString
@@ -14,9 +15,12 @@
 except ImportError:
     TorClientEndpoint = None
 DEFAULT_VALUE = "DEFAULT_VALUE"
 import ipaddress
+from . import _interfaces
 from .timing import DebugTiming
 from .transit import allocate_tcp_port
 
+
+@implementer(_interfaces.ITorManager)
 class TorManager:
     def __init__(self, reactor, launch_tor=False, tor_control_port=None,
                  timing=None, stderr=sys.stderr):
diff --git a/src/wormhole/wormhole.py b/src/wormhole/wormhole.py
index 7c6de48..a7a8606 100644
--- a/src/wormhole/wormhole.py
+++ b/src/wormhole/wormhole.py
@@ -1,949 +1,302 @@
 from __future__ import print_function, absolute_import, unicode_literals
-import os, sys, re
-from six.moves.urllib_parse import urlparse
-from twisted.internet import defer, endpoints, error
-from twisted.internet.threads import deferToThread, blockingCallFromThread
-from twisted.internet.defer import inlineCallbacks, returnValue
-from twisted.python import log, failure
-from autobahn.twisted import websocket
-from nacl.secret import SecretBox
-from nacl.exceptions import CryptoError
-from nacl import utils
-from spake2 import SPAKE2_Symmetric
-from hashlib import sha256
-from . import __version__
-from .
import codes -#from .errors import ServerError, Timeout -from .errors import (WrongPasswordError, InternalError, WelcomeError, - WormholeClosedError, KeyFormatError) +import os, sys +from attr import attrs, attrib +from zope.interface import implementer +from twisted.python import failure +from twisted.internet import defer +from ._interfaces import IWormhole +from .util import bytes_to_hexstr from .timing import DebugTiming -from .util import (to_bytes, bytes_to_hexstr, hexstr_to_bytes, - dict_to_bytes, bytes_to_dict) -from hkdf import Hkdf +from .journal import ImmediateJournal +from ._boss import Boss +from ._key import derive_key +from .errors import NoKeyError, WormholeClosed +from .util import to_bytes -def HKDF(skm, outlen, salt=None, CTXinfo=b""): - return Hkdf(salt, skm).expand(CTXinfo, outlen) +# We can provide different APIs to different apps: +# * Deferreds +# w.when_code().addCallback(print_code) +# w.send(data) +# w.when_received().addCallback(got_data) +# w.close().addCallback(closed) -CONFMSG_NONCE_LENGTH = 128//8 -CONFMSG_MAC_LENGTH = 256//8 -def make_confmsg(confkey, nonce): - return nonce+HKDF(confkey, CONFMSG_MAC_LENGTH, nonce) - - -# We send the following messages through the relay server to the far side (by -# sending "add" commands to the server, and getting "message" responses): +# * delegate callbacks (better for journaled environments) +# w = wormhole(delegate=app) +# w.send(data) +# app.wormhole_got_code(code) +# app.wormhole_got_verifier(verifier) +# app.wormhole_got_version(versions) +# app.wormhole_receive(data) +# w.close() +# app.wormhole_closed() # -# phase=setup: -# * unauthenticated version strings (but why?) 
-# * early warmup for connection hints ("I can do tor, spin up HS") -# * wordlist l10n identifier -# phase=pake: just the SPAKE2 'start' message (binary) -# phase=version: version data, key verification (HKDF(key, nonce)+nonce) -# phase=1,2,3,..: application messages - -class WSClient(websocket.WebSocketClientProtocol): - def onOpen(self): - self.wormhole_open = True - self.factory.d.callback(self) - - def onMessage(self, payload, isBinary): - assert not isBinary - self.wormhole._ws_dispatch_response(payload) - - def onClose(self, wasClean, code, reason): - if self.wormhole_open: - self.wormhole._ws_closed(wasClean, code, reason) - else: - # we closed before establishing a connection (onConnect) or - # finishing WebSocket negotiation (onOpen): errback - self.factory.d.errback(error.ConnectError(reason)) - -class WSFactory(websocket.WebSocketClientFactory): - protocol = WSClient - def buildProtocol(self, addr): - proto = websocket.WebSocketClientFactory.buildProtocol(self, addr) - proto.wormhole = self.wormhole - proto.wormhole_open = False - return proto - - -class _GetCode: - def __init__(self, code_length, send_command, timing): - self._code_length = code_length - self._send_command = send_command - self._timing = timing - self._allocated_d = defer.Deferred() - - @inlineCallbacks - def go(self): - with self._timing.add("allocate"): - self._send_command("allocate") - nameplate_id = yield self._allocated_d - code = codes.make_code(nameplate_id, self._code_length) - assert isinstance(code, type("")), type(code) - returnValue(code) - - def _response_handle_allocated(self, msg): - nid = msg["nameplate"] - assert isinstance(nid, type("")), type(nid) - self._allocated_d.callback(nid) - -class _InputCode: - def __init__(self, reactor, prompt, code_length, send_command, timing, - stderr): - self._reactor = reactor - self._prompt = prompt - self._code_length = code_length - self._send_command = send_command - self._timing = timing - self._stderr = stderr - - 
@inlineCallbacks - def _list(self): - self._lister_d = defer.Deferred() - self._send_command("list") - nameplates = yield self._lister_d - self._lister_d = None - returnValue(nameplates) - - def _list_blocking(self): - return blockingCallFromThread(self._reactor, self._list) - - @inlineCallbacks - def go(self): - # fetch the list of nameplates ahead of time, to give us a chance to - # discover the welcome message (and warn the user about an obsolete - # client) - # - # TODO: send the request early, show the prompt right away, hide the - # latency in the user's indecision and slow typing. If we're lucky - # the answer will come back before they hit TAB. - - initial_nameplate_ids = yield self._list() - with self._timing.add("input code", waiting="user"): - t = self._reactor.addSystemEventTrigger("before", "shutdown", - self._warn_readline) - res = yield deferToThread(codes.input_code_with_completion, - self._prompt, - initial_nameplate_ids, - self._list_blocking, - self._code_length) - (code, used_completion) = res - self._reactor.removeSystemEventTrigger(t) - if not used_completion: - self._remind_about_tab() - returnValue(code) - - def _response_handle_nameplates(self, msg): - nameplates = msg["nameplates"] - assert isinstance(nameplates, list), type(nameplates) - nids = [] - for n in nameplates: - assert isinstance(n, dict), type(n) - nameplate_id = n["id"] - assert isinstance(nameplate_id, type("")), type(nameplate_id) - nids.append(nameplate_id) - self._lister_d.callback(nids) - - def _warn_readline(self): - # When our process receives a SIGINT, Twisted's SIGINT handler will - # stop the reactor and wait for all threads to terminate before the - # process exits. However, if we were waiting for - # input_code_with_completion() when SIGINT happened, the readline - # thread will be blocked waiting for something on stdin. Trick the - # user into satisfying the blocking read so we can exit. 
- print("\nCommand interrupted: please press Return to quit", - file=sys.stderr) - - # Other potential approaches to this problem: - # * hard-terminate our process with os._exit(1), but make sure the - # tty gets reset to a normal mode ("cooked"?) first, so that the - # next shell command the user types is echoed correctly - # * track down the thread (t.p.threadable.getThreadID from inside the - # thread), get a cffi binding to pthread_kill, deliver SIGINT to it - # * allocate a pty pair (pty.openpty), replace sys.stdin with the - # slave, build a pty bridge that copies bytes (and other PTY - # things) from the real stdin to the master, then close the slave - # at shutdown, so readline sees EOF - # * write tab-completion and basic editing (TTY raw mode, - # backspace-is-erase) without readline, probably with curses or - # twisted.conch.insults - # * write a separate program to get codes (maybe just "wormhole - # --internal-get-code"), run it as a subprocess, let it inherit - # stdin/stdout, send it SIGINT when we receive SIGINT ourselves. It - # needs an RPC mechanism (over some extra file descriptors) to ask - # us to fetch the current nameplate_id list. - # - # Note that hard-terminating our process with os.kill(os.getpid(), - # signal.SIGKILL), or SIGTERM, doesn't seem to work: the thread - # doesn't see the signal, and we must still wait for stdin to make - # readline finish. 
- - def _remind_about_tab(self): - print(" (note: you can use <Tab> to complete words)", file=self._stderr) class _WelcomeHandler: - def __init__(self, url, current_version, signal_error): - self._ws_url = url - self._version_warning_displayed = False - self._current_version = current_version - self._signal_error = signal_error + def __init__(self, url, stderr=sys.stderr): + self.relay_url = url + self.stderr = stderr def handle_welcome(self, welcome): if "motd" in welcome: motd_lines = welcome["motd"].splitlines() motd_formatted = "\n ".join(motd_lines) print("Server (at %s) says:\n %s" % - (self._ws_url, motd_formatted), file=sys.stderr) + (self.relay_url, motd_formatted), file=self.stderr) - # Only warn if we're running a release version (e.g. 0.0.6, not - # 0.0.6-DISTANCE-gHASH). Only warn once. - if ("current_cli_version" in welcome - and "-" not in self._current_version - and not self._version_warning_displayed - and welcome["current_cli_version"] != self._current_version): - print("Warning: errors may occur unless both sides are running the same version", file=sys.stderr) - print("Server claims %s is current, but ours is %s" - % (welcome["current_cli_version"], self._current_version), - file=sys.stderr) - self._version_warning_displayed = True +@attrs +@implementer(IWormhole) +class _DelegatedWormhole(object): + _delegate = attrib() - if "error" in welcome: - return self._signal_error(WelcomeError(welcome["error"]), - "unwelcome") -# states for nameplates, mailboxes, and the websocket connection -(CLOSED, OPENING, OPEN, CLOSING) = ("closed", "opening", "open", "closing") - - -class _Wormhole: - DEBUG = False - - def __init__(self, appid, relay_url, reactor, tor_manager, timing, stderr): - self._appid = appid - self._ws_url = relay_url - self._reactor = reactor - self._tor_manager = tor_manager - self._timing = timing - self._stderr = stderr -
self._welcomer = _WelcomeHandler(self._ws_url, __version__, - self._signal_error) - self._side = bytes_to_hexstr(os.urandom(5)) - self._connection_state = CLOSED - self._connection_waiters = [] - self._ws_t = None - self._started_get_code = False - self._get_code = None - self._started_input_code = False - self._input_code_waiter = None - self._code = None - self._nameplate_id = None - self._nameplate_state = CLOSED - self._mailbox_id = None - self._mailbox_state = CLOSED - self._flag_need_nameplate = True - self._flag_need_to_see_mailbox_used = True - self._flag_need_to_build_msg1 = True - self._flag_need_to_send_PAKE = True - self._establish_key_called = False - self._key_waiter = None + def __attrs_post_init__(self): self._key = None - self._version_message = None - self._version_checked = False - self._get_verifier_called = False - self._verifier = None # bytes - self._verify_result = None # bytes or a Failure - self._verifier_waiter = None + def _set_boss(self, boss): + self._boss = boss - self._my_versions = {} # sent - self._their_versions = {} # received + # from above - self._close_called = False # the close() API has been called - self._closing = False # we've started shutdown - self._disconnect_waiter = defer.Deferred() - self._error = None - - self._next_send_phase = 0 - # send() queues plaintext here, waiting for a connection and the key - self._plaintext_to_send = [] # (phase, plaintext) - self._sent_phases = set() # to detect double-send - - self._next_receive_phase = 0 - self._receive_waiters = {} # phase -> Deferred - self._received_messages = {} # phase -> plaintext - - # API METHODS for applications to call - - # You must use at least one of these entry points, to establish the - # wormhole code. Other APIs will stall or be queued until we have one. - - # entry point 1: generate a new code. returns a Deferred - def get_code(self, code_length=2): # XX rename to allocate_code()? create_? 
- return self._API_get_code(code_length) - - # entry point 2: interactively type in a code, with completion. returns - # Deferred - def input_code(self, prompt="Enter wormhole code: ", code_length=2): - return self._API_input_code(prompt, code_length) - - # entry point 3: paste in a fully-formed code. No return value. + def allocate_code(self, code_length=2): + self._boss.allocate_code(code_length) + def input_code(self, stdio): + self._boss.input_code(stdio) def set_code(self, code): - self._API_set_code(code) + self._boss.set_code(code) - # todo: restore-saved-state entry points + ## def serialize(self): + ## s = {"serialized_wormhole_version": 1, + ## "boss": self._boss.serialize(), + ## } + ## return s - def establish_key(self): - """ - returns a Deferred that fires when we've established the shared key. - When successful, the Deferred fires with a simple `True`, otherwise - it fails. - """ - return self._API_establish_key() - - def verify(self): - """Returns a Deferred that fires when we've heard back from the other - side, and have confirmed that they used the right wormhole code. When - successful, the Deferred fires with a "verifier" (a bytestring) which - can be compared out-of-band before making additional API calls. If - they used the wrong wormhole code, the Deferred errbacks with - WrongPasswordError. - """ - return self._API_verify() - - def send(self, outbound_data): - return self._API_send(outbound_data) - - def get(self): - return self._API_get() + def send(self, plaintext): + self._boss.send(plaintext) def derive_key(self, purpose, length): """Derive a new key from the established wormhole channel for some other purpose. This is a deterministic randomized function of the session key and the 'purpose' string (unicode/py3-string). This - cannot be called until verify() or get() has fired. + cannot be called until the key has been established, nor after close() + was called.
""" - return self._API_derive_key(purpose, length) - - def close(self, res=None): - """Collapse the wormhole, freeing up server resources and flushing - all pending messages. Returns a Deferred that fires when everything - is done. It fires with any argument close() was given, to enable use - as a d.addBoth() handler: - - w = wormhole(...) - d = w.get() - .. - d.addBoth(w.close) - return d - - Another reasonable approach is to use inlineCallbacks: - - @inlineCallbacks - def pair(self, code): - w = wormhole(...) - try: - them = yield w.get() - finally: - yield w.close() - """ - return self._API_close(res) - - # INTERNAL METHODS beyond here - - def _start(self): - d = self._connect() # causes stuff to happen - d.addErrback(log.err) - return d # fires when connection is established, if you care - - - - def _make_endpoint(self, hostname, port): - if self._tor_manager: - return self._tor_manager.get_endpoint_for(hostname, port) - # note: HostnameEndpoints have a default 30s timeout - return endpoints.HostnameEndpoint(self._reactor, hostname, port) - - def _connect(self): - # TODO: if we lose the connection, make a new one, re-establish the - # state - assert self._side - self._connection_state = OPENING - self._ws_t = self._timing.add("open websocket") - p = urlparse(self._ws_url) - f = WSFactory(self._ws_url) - f.setProtocolOptions(autoPingInterval=60, autoPingTimeout=600) - f.wormhole = self - f.d = defer.Deferred() - # TODO: if hostname="localhost", I get three factories starting - # and stopping (maybe 127.0.0.1, ::1, and something else?), and - # an error in the factory is masked. 
- ep = self._make_endpoint(p.hostname, p.port or 80) - # .connect errbacks if the TCP connection fails - d = ep.connect(f) - d.addCallback(self._event_connected) - # f.d is errbacked if WebSocket negotiation fails, and the WebSocket - # drops any data sent before onOpen() fires, so we must wait for it - d.addCallback(lambda _: f.d) - d.addCallback(self._event_ws_opened) - return d - - def _event_connected(self, ws): - self._ws = ws - if self._ws_t: - self._ws_t.finish() - - def _event_ws_opened(self, _): - self._connection_state = OPEN - if self._closing: - return self._maybe_finished_closing() - self._ws_send_command("bind", appid=self._appid, side=self._side) - self._maybe_claim_nameplate() - self._maybe_send_pake() - waiters, self._connection_waiters = self._connection_waiters, [] - for d in waiters: - d.callback(None) - - def _when_connected(self): - if self._connection_state == OPEN: - return defer.succeed(None) - d = defer.Deferred() - self._connection_waiters.append(d) - return d - - def _ws_send_command(self, mtype, **kwargs): - # msgid is used by misc/dump-timing.py to correlate our sends with - # their receives, and vice versa. They are also correlated with the - # ACKs we get back from the server (which we otherwise ignore). There - # are so few messages, 16 bits is enough to be mostly-unique. 
- if self.DEBUG: print("SEND", mtype) - kwargs["id"] = bytes_to_hexstr(os.urandom(2)) - kwargs["type"] = mtype - payload = dict_to_bytes(kwargs) - self._timing.add("ws_send", _side=self._side, **kwargs) - self._ws.sendMessage(payload, False) - - def _ws_dispatch_response(self, payload): - msg = bytes_to_dict(payload) - if self.DEBUG and msg["type"]!="ack": print("DIS", msg["type"], msg) - self._timing.add("ws_receive", _side=self._side, message=msg) - mtype = msg["type"] - meth = getattr(self, "_response_handle_"+mtype, None) - if not meth: - # make tests fail, but real application will ignore it - log.err(ValueError("Unknown inbound message type %r" % (msg,))) - return - return meth(msg) - - def _response_handle_ack(self, msg): - pass - - def _response_handle_welcome(self, msg): - self._welcomer.handle_welcome(msg["welcome"]) - - # entry point 1: generate a new code - @inlineCallbacks - def _API_get_code(self, code_length): - if self._code is not None: raise InternalError - if self._started_get_code: raise InternalError - self._started_get_code = True - with self._timing.add("API get_code"): - yield self._when_connected() - gc = _GetCode(code_length, self._ws_send_command, self._timing) - self._get_code = gc - self._response_handle_allocated = gc._response_handle_allocated - # TODO: signal_error - code = yield gc.go() - self._get_code = None - self._nameplate_state = OPEN - self._event_learned_code(code) - returnValue(code) - - # entry point 2: interactively type in a code, with completion - @inlineCallbacks - def _API_input_code(self, prompt, code_length): - if self._code is not None: raise InternalError - if self._started_input_code: raise InternalError - self._started_input_code = True - with self._timing.add("API input_code"): - yield self._when_connected() - ic = _InputCode(self._reactor, prompt, code_length, - self._ws_send_command, self._timing, self._stderr) - self._response_handle_nameplates = ic._response_handle_nameplates - # we reveal the Deferred 
we're waiting on, so _signal_error can - # wake us up if something goes wrong (like a welcome error) - self._input_code_waiter = ic.go() - code = yield self._input_code_waiter - self._input_code_waiter = None - self._event_learned_code(code) - returnValue(None) - - # entry point 3: paste in a fully-formed code - def _API_set_code(self, code): - self._timing.add("API set_code") - if not isinstance(code, type(u"")): - raise TypeError("Unexpected code type '{}'".format(type(code))) - if self._code is not None: - raise InternalError - self._event_learned_code(code) - - # TODO: entry point 4: restore pre-contact saved state (we haven't heard - # from the peer yet, so we still need the nameplate) - - # TODO: entry point 5: restore post-contact saved state (so we don't need - # or use the nameplate, only the mailbox) - def _restore_post_contact_state(self, state): - # ... - self._flag_need_nameplate = False - #self._mailbox_id = X(state) - self._event_learned_mailbox() - - def _event_learned_code(self, code): - self._timing.add("code established") - # bail out early if the password contains spaces... - # this should raise a useful error - if ' ' in code: - raise KeyFormatError("code (%s) contains spaces." 
% code) - self._code = code - mo = re.search(r'^(\d+)-', code) - if not mo: - raise ValueError("code (%s) must start with NN-" % code) - nid = mo.group(1) - assert isinstance(nid, type("")), type(nid) - self._nameplate_id = nid - # fire more events - self._maybe_build_msg1() - self._event_learned_nameplate() - - def _maybe_build_msg1(self): - if not (self._code and self._flag_need_to_build_msg1): - return - with self._timing.add("pake1", waiting="crypto"): - self._sp = SPAKE2_Symmetric(to_bytes(self._code), - idSymmetric=to_bytes(self._appid)) - self._msg1 = self._sp.start() - self._flag_need_to_build_msg1 = False - self._event_built_msg1() - - def _event_built_msg1(self): - self._maybe_send_pake() - - # every _maybe_X starts with a set of conditions - # for each such condition Y, every _event_Y must call _maybe_X - - def _event_learned_nameplate(self): - self._maybe_claim_nameplate() - - def _maybe_claim_nameplate(self): - if not (self._nameplate_id and self._connection_state == OPEN): - return - self._ws_send_command("claim", nameplate=self._nameplate_id) - self._nameplate_state = OPEN - - def _response_handle_claimed(self, msg): - mailbox_id = msg["mailbox"] - assert isinstance(mailbox_id, type("")), type(mailbox_id) - self._mailbox_id = mailbox_id - self._event_learned_mailbox() - - def _event_learned_mailbox(self): - if not self._mailbox_id: raise InternalError - assert self._mailbox_state == CLOSED, self._mailbox_state - if self._closing: - return - self._ws_send_command("open", mailbox=self._mailbox_id) - self._mailbox_state = OPEN - # causes old messages to be sent now, and subscribes to new messages - self._maybe_send_pake() - self._maybe_send_phase_messages() - - def _maybe_send_pake(self): - # TODO: deal with reentrant call - if not (self._connection_state == OPEN - and self._mailbox_state == OPEN - and self._flag_need_to_send_PAKE): - return - body = {"pake_v1": bytes_to_hexstr(self._msg1)} - payload = dict_to_bytes(body) - self._msg_send("pake", 
payload) - self._flag_need_to_send_PAKE = False - - def _event_received_pake(self, pake_msg): - payload = bytes_to_dict(pake_msg) - msg2 = hexstr_to_bytes(payload["pake_v1"]) - with self._timing.add("pake2", waiting="crypto"): - self._key = self._sp.finish(msg2) - self._event_established_key() - - def _event_established_key(self): - self._timing.add("key established") - self._maybe_notify_key() - - # both sides send different (random) version messages - self._send_version_message() - - verifier = self._derive_key(b"wormhole:verifier") - self._event_computed_verifier(verifier) - - self._maybe_check_version() - self._maybe_send_phase_messages() - - def _API_establish_key(self): - if self._error: return defer.fail(self._error) - if self._establish_key_called: raise InternalError - self._establish_key_called = True - if self._key is not None: - return defer.succeed(True) - self._key_waiter = defer.Deferred() - return self._key_waiter - - def _maybe_notify_key(self): - if self._key is None: - return - if self._error: - result = failure.Failure(self._error) - else: - result = True - if self._key_waiter and not self._key_waiter.called: - self._key_waiter.callback(result) - - def _send_version_message(self): - # this is encrypted like a normal phase message, and includes a - # dictionary of version flags to let the other Wormhole know what - # we're capable of (for future expansion) - plaintext = dict_to_bytes(self._my_versions) - phase = "version" - data_key = self._derive_phase_key(self._side, phase) - encrypted = self._encrypt_data(data_key, plaintext) - self._msg_send(phase, encrypted) - - def _API_verify(self): - if self._error: return defer.fail(self._error) - if self._get_verifier_called: raise InternalError - self._get_verifier_called = True - if self._verify_result: - return defer.succeed(self._verify_result) # bytes or Failure - self._verifier_waiter = defer.Deferred() - return self._verifier_waiter - - def _event_computed_verifier(self, verifier): - 
self._verifier = verifier - self._maybe_notify_verify() - - def _maybe_notify_verify(self): - if not (self._verifier and self._version_checked): - return - if self._error: - self._verify_result = failure.Failure(self._error) - else: - self._verify_result = self._verifier - if self._verifier_waiter and not self._verifier_waiter.called: - self._verifier_waiter.callback(self._verify_result) - - def _event_received_version(self, side, body): - # We ought to have the master key by now, because sensible peers - # should always send "pake" before sending "version". It might be - # nice to relax this requirement, which means storing the received - # version message, and having _event_established_key call - # _check_version() - self._version_message = (side, body) - self._maybe_check_version() - - def _maybe_check_version(self): - if not (self._key and self._version_message): - return - if self._version_checked: - return - self._version_checked = True - - side, body = self._version_message - data_key = self._derive_phase_key(side, "version") - try: - plaintext = self._decrypt_data(data_key, body) - except CryptoError: - # this makes all API calls fail - if self.DEBUG: print("CONFIRM FAILED") - self._signal_error(WrongPasswordError(), "scary") - return - msg = bytes_to_dict(plaintext) - self._version_received(msg) - - self._maybe_notify_verify() - - def _version_received(self, msg): - self._their_versions = msg - - def _API_send(self, outbound_data): - if self._error: raise self._error - if not isinstance(outbound_data, type(b"")): - raise TypeError(type(outbound_data)) - phase = self._next_send_phase - self._next_send_phase += 1 - self._plaintext_to_send.append( (phase, outbound_data) ) - with self._timing.add("API send", phase=phase): - self._maybe_send_phase_messages() - - def _derive_phase_key(self, side, phase): - assert isinstance(side, type("")), type(side) - assert isinstance(phase, type("")), type(phase) - side_bytes = side.encode("ascii") - phase_bytes = 
phase.encode("ascii") - purpose = (b"wormhole:phase:" - + sha256(side_bytes).digest() - + sha256(phase_bytes).digest()) - return self._derive_key(purpose) - - def _maybe_send_phase_messages(self): - # TODO: deal with reentrant call - if not (self._connection_state == OPEN - and self._mailbox_state == OPEN - and self._key): - return - plaintexts = self._plaintext_to_send - self._plaintext_to_send = [] - for pm in plaintexts: - (phase_int, plaintext) = pm - assert isinstance(phase_int, int), type(phase_int) - phase = "%d" % phase_int - data_key = self._derive_phase_key(self._side, phase) - encrypted = self._encrypt_data(data_key, plaintext) - self._msg_send(phase, encrypted) - - def _encrypt_data(self, key, data): - # Without predefined roles, we can't derive predictably unique keys - # for each side, so we use the same key for both. We use random - # nonces to keep the messages distinct, and we automatically ignore - # reflections. - # TODO: HKDF(side, nonce, key) ?? include 'side' to prevent - # reflections, since we no longer compare messages - assert isinstance(key, type(b"")), type(key) - assert isinstance(data, type(b"")), type(data) - assert len(key) == SecretBox.KEY_SIZE, len(key) - box = SecretBox(key) - nonce = utils.random(SecretBox.NONCE_SIZE) - return box.encrypt(data, nonce) - - def _msg_send(self, phase, body): - if phase in self._sent_phases: raise InternalError - assert self._mailbox_state == OPEN, self._mailbox_state - self._sent_phases.add(phase) - # TODO: retry on failure, with exponential backoff. We're guarding - # against the rendezvous server being temporarily offline. 
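The `_derive_phase_key` logic above (hash `side` and `phase` separately, concatenate into a purpose string, then HKDF-expand the session key) can be sketched on its own. This is an illustrative reimplementation, not the project's code: it writes out RFC 5869 HKDF-SHA256 with the stdlib in place of the `hkdf` package the module imports, and the function names `hkdf_sha256`/`derive_phase_key` are local to the sketch:

```python
import hashlib, hmac

def hkdf_sha256(skm, outlen, ctxinfo, salt=b""):
    # RFC 5869 extract-and-expand, standing in for Hkdf(salt, skm).expand()
    salt = salt or b"\x00" * hashlib.sha256().digest_size
    prk = hmac.new(salt, skm, hashlib.sha256).digest()
    out, t, counter = b"", b"", 1
    while len(out) < outlen:
        t = hmac.new(prk, t + ctxinfo + bytes([counter]), hashlib.sha256).digest()
        out += t
        counter += 1
    return out[:outlen]

def derive_phase_key(key, side, phase, length=32):
    # Hashing side and phase to fixed-length digests keeps the two fields
    # unambiguous inside the concatenated purpose string.
    purpose = (b"wormhole:phase:"
               + hashlib.sha256(side.encode("ascii")).digest()
               + hashlib.sha256(phase.encode("ascii")).digest())
    return hkdf_sha256(key, length, purpose)

k1 = derive_phase_key(b"session-key", "side1", "1")
```

Each (side, phase) pair yields a distinct 32-byte key (SecretBox.KEY_SIZE), which is why both peers can encrypt with the same session key without nonce coordination across phases.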
-        self._timing.add("add", phase=phase)
-        self._ws_send_command("add", phase=phase, body=bytes_to_hexstr(body))
-
-    def _event_mailbox_used(self):
-        if self.DEBUG: print("_event_mailbox_used")
-        if self._flag_need_to_see_mailbox_used:
-            self._maybe_release_nameplate()
-            self._flag_need_to_see_mailbox_used = False
-
-    def _API_derive_key(self, purpose, length):
-        if self._error: raise self._error
-        if self._key is None:
-            raise InternalError # call derive_key after get_verifier() or get()
-        if not isinstance(purpose, type("")): raise TypeError(type(purpose))
-        return self._derive_key(to_bytes(purpose), length)
+        if not self._key: raise NoKeyError()
+        return derive_key(self._key, to_bytes(purpose), length)

-    def _derive_key(self, purpose, length=SecretBox.KEY_SIZE):
-        if not isinstance(purpose, type(b"")): raise TypeError(type(purpose))
-        if self._key is None:
-            raise InternalError # call derive_key after get_verifier() or get()
-        return HKDF(self._key, length, CTXinfo=purpose)
+    def close(self):
+        self._boss.close()

-    def _response_handle_message(self, msg):
-        side = msg["side"]
-        phase = msg["phase"]
-        assert isinstance(phase, type("")), type(phase)
-        body = hexstr_to_bytes(msg["body"])
-        if side == self._side:
+    def debug_set_trace(self, client_name, which="B N M S O K R RC L C T",
+                        file=sys.stderr):
+        self._boss._set_trace(client_name, which, file)
+
+    # from below
+    def got_code(self, code):
+        self._delegate.wormhole_code(code)
+    def got_key(self, key):
+        self._delegate.wormhole_key()
+        self._key = key # for derive_key()
+    def got_verifier(self, verifier):
+        self._delegate.wormhole_verified(verifier)
+    def got_version(self, versions):
+        self._delegate.wormhole_version(versions)
+    def received(self, plaintext):
+        self._delegate.wormhole_received(plaintext)
+    def closed(self, result):
+        self._delegate.wormhole_closed(result)
+
+@implementer(IWormhole)
+class _DeferredWormhole(object):
+    def __init__(self):
+        self._code = None
+        self._code_observers = []
+        self._key = None
+        self._key_observers = []
+        self._verifier = None
+        self._verifier_observers = []
+        self._versions = None
+        self._version_observers = []
+        self._received_data = []
+        self._received_observers = []
+        self._observer_result = None
+        self._closed_result = None
+        self._closed_observers = []
+
+    def _set_boss(self, boss):
+        self._boss = boss
+
+    # from above
+    def when_code(self):
+        # TODO: consider throwing error unless one of allocate/set/input_code
+        # was called first. It's legit to grab the Deferred before triggering
+        # the process that will cause it to fire, but forbidding that
+        # ordering would make it easier to cause programming errors that
+        # forget to trigger it entirely.
+        if self._observer_result is not None:
+            return defer.fail(self._observer_result)
+        if self._code is not None:
+            return defer.succeed(self._code)
+        d = defer.Deferred()
+        self._code_observers.append(d)
+        return d
+
+    def when_key(self):
+        if self._observer_result is not None:
+            return defer.fail(self._observer_result)
+        if self._key is not None:
+            return defer.succeed(self._key)
+        d = defer.Deferred()
+        self._key_observers.append(d)
+        return d
+
+    def when_verified(self):
+        if self._observer_result is not None:
+            return defer.fail(self._observer_result)
+        if self._verifier is not None:
+            return defer.succeed(self._verifier)
+        d = defer.Deferred()
+        self._verifier_observers.append(d)
+        return d
+
+    def when_version(self):
+        if self._observer_result is not None:
+            return defer.fail(self._observer_result)
+        if self._versions is not None:
+            return defer.succeed(self._versions)
+        d = defer.Deferred()
+        self._version_observers.append(d)
+        return d
+
+    def when_received(self):
+        if self._observer_result is not None:
+            return defer.fail(self._observer_result)
+        if self._received_data:
+            return defer.succeed(self._received_data.pop(0))
+        d = defer.Deferred()
+        self._received_observers.append(d)
+        return d
+
+    def allocate_code(self, code_length=2):
+        self._boss.allocate_code(code_length)
+
+    def input_code(self):
+        return self._boss.input_code()
+    def set_code(self, code):
+        self._boss.set_code(code)
+
+    # no .serialize in Deferred-mode
+    def send(self, plaintext):
+        self._boss.send(plaintext)
+
+    def derive_key(self, purpose, length):
+        """Derive a new key from the established wormhole channel for some
+        other purpose. This is a deterministic randomized function of the
+        session key and the 'purpose' string (unicode/py3-string). This
+        cannot be called until when_verified() has fired, nor after close()
+        was called.
+        """
+        if not isinstance(purpose, type("")): raise TypeError(type(purpose))
+        if not self._key: raise NoKeyError()
+        return derive_key(self._key, to_bytes(purpose), length)
+
+    def close(self):
+        # fails with WormholeError unless we established a connection
+        # (state=="happy"). Fails with WrongPasswordError (a subclass of
+        # WormholeError) if state=="scary".
+        if self._closed_result:
+            return defer.succeed(self._closed_result) # maybe Failure
+        d = defer.Deferred()
+        self._closed_observers.append(d)
+        self._boss.close() # only need to close if it wasn't already
+        return d
+
+    def debug_set_trace(self, client_name, which="B N M S O K R RC L C T",
+                        file=sys.stderr):
+        self._boss._set_trace(client_name, which, file)
+
+    # from below
+    def got_code(self, code):
+        self._code = code
+        for d in self._code_observers:
+            d.callback(code)
+        self._code_observers[:] = []
+    def got_key(self, key):
+        self._key = key # for derive_key()
+        for d in self._key_observers:
+            d.callback(key)
+        self._key_observers[:] = []
+    def got_verifier(self, verifier):
+        self._verifier = verifier
+        for d in self._verifier_observers:
+            d.callback(verifier)
+        self._verifier_observers[:] = []
+    def got_version(self, versions):
+        self._versions = versions
+        for d in self._version_observers:
+            d.callback(versions)
+        self._version_observers[:] = []
+
+    def received(self, plaintext):
+        if self._received_observers:
+            self._received_observers.pop(0).callback(plaintext)
             return
-            self._event_received_peer_message(side, phase, body)
+        self._received_data.append(plaintext)

-    def _event_received_peer_message(self, side, phase, body):
-        # any message in the mailbox means we no longer need the nameplate
-        self._event_mailbox_used()
+    def closed(self, result):
+        #print("closed", result, type(result))
+        if isinstance(result, Exception):
+            self._observer_result = self._closed_result = failure.Failure(result)
+        else:
+            # pending w.key()/w.verify()/w.version()/w.read() get an error
+            self._observer_result = WormholeClosed(result)
+            # but w.close() only gets error if we're unhappy
+            self._closed_result = result
+        for d in self._key_observers:
+            d.errback(self._observer_result)
+        for d in self._verifier_observers:
+            d.errback(self._observer_result)
+        for d in self._version_observers:
+            d.errback(self._observer_result)
+        for d in self._received_observers:
+            d.errback(self._observer_result)
+        for d in self._closed_observers:
+            d.callback(self._closed_result)

-        if self._closing:
-            log.msg("received peer message while closing '%s'" % phase)
-        if phase in self._received_messages:
-            log.msg("ignoring duplicate peer message '%s'" % phase)
-            return
-        if phase == "pake":
-            self._received_messages["pake"] = body
-            return self._event_received_pake(body)
-        if phase == "version":
-            self._received_messages["version"] = body
-            return self._event_received_version(side, body)
-        if re.search(r'^\d+$', phase):
-            return self._event_received_phase_message(side, phase, body)
-        # ignore unrecognized phases, for forwards-compatibility
-        log.msg("received unknown phase '%s'" % phase)
-
-    def _event_received_phase_message(self, side, phase, body):
-        # It's a numbered phase message, aimed at the application above us.
-        # Decrypt and deliver upstairs, notifying anyone waiting on it
-        try:
-            data_key = self._derive_phase_key(side, phase)
-            plaintext = self._decrypt_data(data_key, body)
-        except CryptoError:
-            e = WrongPasswordError()
-            self._signal_error(e, "scary") # flunk all other API calls
-            # make tests fail, if they aren't explicitly catching it
-            if self.DEBUG: print("CryptoError in msg received")
-            log.err(e)
-            if self.DEBUG: print(" did log.err", e)
-            return # ignore this message
-        self._received_messages[phase] = plaintext
-        if phase in self._receive_waiters:
-            d = self._receive_waiters.pop(phase)
-            d.callback(plaintext)
-
-    def _decrypt_data(self, key, encrypted):
-        assert isinstance(key, type(b"")), type(key)
-        assert isinstance(encrypted, type(b"")), type(encrypted)
-        assert len(key) == SecretBox.KEY_SIZE, len(key)
-        box = SecretBox(key)
-        data = box.decrypt(encrypted)
-        return data
-
-    def _API_get(self):
-        if self._error: return defer.fail(self._error)
-        phase = "%d" % self._next_receive_phase
-        self._next_receive_phase += 1
-        with self._timing.add("API get", phase=phase):
-            if phase in self._received_messages:
-                return defer.succeed(self._received_messages[phase])
-            d = self._receive_waiters[phase] = defer.Deferred()
-            return d
-
-    def _signal_error(self, error, mood):
-        if self.DEBUG: print("_signal_error", error, mood)
-        if self._error:
-            return
-        self._maybe_close(error, mood)
-        if self.DEBUG: print("_signal_error done")
-
-    @inlineCallbacks
-    def _API_close(self, res, mood="happy"):
-        if self.DEBUG: print("close")
-        if self._close_called: raise InternalError
-        self._close_called = True
-        self._maybe_close(WormholeClosedError(), mood)
-        if self.DEBUG: print("waiting for disconnect")
-        yield self._disconnect_waiter
-        returnValue(res)
-
-    def _maybe_close(self, error, mood):
-        if self._closing:
-            return
-
-        # ordering constraints:
-        # * must wait for nameplate/mailbox acks before closing the websocket
-        # * must mark APIs for failure before errbacking Deferreds
-        #   * since we give up control
-        # * must mark self._closing before errbacking Deferreds
-        #   * since caller may call close() when we give up control
-        #   * and close() will reenter _maybe_close
-
-        self._error = error # causes new API calls to fail
-
-        # since we're about to give up control by errbacking any API
-        # Deferreds, set self._closing, to make sure that a new call to
-        # close() isn't going to confuse anything
-        self._closing = True
-
-        # now errback all API deferreds except close(): get_code,
-        # input_code, verify, get
-        if self._input_code_waiter and not self._input_code_waiter.called:
-            self._input_code_waiter.errback(error)
-        for d in self._connection_waiters: # input_code, get_code (early)
-            if self.DEBUG: print("EB cw")
-            d.errback(error)
-        if self._get_code: # get_code (late)
-            if self.DEBUG: print("EB gc")
-            self._get_code._allocated_d.errback(error)
-        if self._verifier_waiter and not self._verifier_waiter.called:
-            if self.DEBUG: print("EB VW")
-            self._verifier_waiter.errback(error)
-        if self._key_waiter and not self._key_waiter.called:
-            if self.DEBUG: print("EB KW")
-            self._key_waiter.errback(error)
-        for d in self._receive_waiters.values():
-            if self.DEBUG: print("EB RW")
-            d.errback(error)
-        # Release nameplate and close mailbox, if either was claimed/open.
-        # Since _closing is True when both ACKs come back, the handlers will
-        # close the websocket. When *that* finishes, _disconnect_waiter()
-        # will fire.
-        self._maybe_release_nameplate()
-        self._maybe_close_mailbox(mood)
-        # In the off chance we got closed before we even claimed the
-        # nameplate, give _maybe_finished_closing a chance to run now.
-        self._maybe_finished_closing()
-
-    def _maybe_release_nameplate(self):
-        if self.DEBUG: print("_maybe_release_nameplate", self._nameplate_state)
-        if self._nameplate_state == OPEN:
-            if self.DEBUG: print(" sending release")
-            self._ws_send_command("release")
-            self._nameplate_state = CLOSING
-
-    def _response_handle_released(self, msg):
-        self._nameplate_state = CLOSED
-        self._maybe_finished_closing()
-
-    def _maybe_close_mailbox(self, mood):
-        if self.DEBUG: print("_maybe_close_mailbox", self._mailbox_state)
-        if self._mailbox_state == OPEN:
-            if self.DEBUG: print(" sending close")
-            self._ws_send_command("close", mood=mood)
-            self._mailbox_state = CLOSING
-
-    def _response_handle_closed(self, msg):
-        self._mailbox_state = CLOSED
-        self._maybe_finished_closing()
-
-    def _maybe_finished_closing(self):
-        if self.DEBUG: print("_maybe_finished_closing", self._closing, self._nameplate_state, self._mailbox_state, self._connection_state)
-        if not self._closing:
-            return
-        if (self._nameplate_state == CLOSED
-            and self._mailbox_state == CLOSED
-            and self._connection_state == OPEN):
-            self._connection_state = CLOSING
-            self._drop_connection()
-
-    def _drop_connection(self):
-        # separate method so it can be overridden by tests
-        self._ws.transport.loseConnection() # probably flushes output
-        # calls _ws_closed() when done
-
-    def _ws_closed(self, wasClean, code, reason):
-        # For now (until we add reconnection), losing the websocket means
-        # losing everything. Make all API callers fail. Help someone waiting
-        # in close() to finish
-        self._connection_state = CLOSED
-        self._disconnect_waiter.callback(None)
-        self._maybe_finished_closing()
-
-    # what needs to happen when _ws_closed() happens unexpectedly
-    # * errback all API deferreds
-    # * maybe: cause new API calls to fail
-    # * obviously can't release nameplate or close mailbox
-    # * can't re-close websocket
-    # * close(wait=True) callers should fire right away
-
-def wormhole(appid, relay_url, reactor, tor_manager=None, timing=None,
-             stderr=sys.stderr):
+def create(appid, relay_url, reactor, # use keyword args for everything else
+           versions={},
+           delegate=None, journal=None, tor_manager=None,
+           timing=None, welcome_handler=None,
+           stderr=sys.stderr):
     timing = timing or DebugTiming()
-    w = _Wormhole(appid, relay_url, reactor, tor_manager, timing, stderr)
-    w._start()
+    side = bytes_to_hexstr(os.urandom(5))
+    journal = journal or ImmediateJournal()
+    if not welcome_handler:
+        welcome_handler = _WelcomeHandler(relay_url).handle_welcome
+    if delegate:
+        w = _DelegatedWormhole(delegate)
+    else:
+        w = _DeferredWormhole()
+    wormhole_versions = {} # will be used to indicate Wormhole capabilities
+    wormhole_versions["app_versions"] = versions # app-specific capabilities
+    b = Boss(w, side, relay_url, appid, wormhole_versions,
+             welcome_handler, reactor, journal,
+             tor_manager, timing)
+    w._set_boss(b)
+    b.start()
     return w

-#def wormhole_from_serialized(data, reactor, timing=None):
-#    timing = timing or DebugTiming()
-#    w = _Wormhole.from_serialized(data, reactor, timing)
-#    return w
+## def from_serialized(serialized, reactor, delegate,
+##                     journal=None, tor_manager=None,
+##                     timing=None, stderr=sys.stderr):
+##     assert serialized["serialized_wormhole_version"] == 1
+##     timing = timing or DebugTiming()
+##     w = _DelegatedWormhole(delegate)
+##     # now unpack state machines, including the SPAKE2 in Key
+##     b = Boss.from_serialized(w, serialized["boss"], reactor, journal, timing)
+##     w._set_boss(b)
+##     b.start() # ??
+##     raise NotImplemented
+##     # should the new Wormhole call got_code? only if it wasn't called before.
diff --git a/src/wormhole/xfer_util.py b/src/wormhole/xfer_util.py
index a3c3dd0..7dcfba9 100644
--- a/src/wormhole/xfer_util.py
+++ b/src/wormhole/xfer_util.py
@@ -1,7 +1,7 @@
 import json

 from twisted.internet.defer import inlineCallbacks, returnValue

-from .wormhole import wormhole
+from . import wormhole
 from .tor_manager import TorManager
 from .errors import NoTorError

@@ -38,16 +38,17 @@ def receive(reactor, appid, relay_url, code,
             raise NoTorError()
         yield tm.start()

-    wh = wormhole(appid, relay_url, reactor, tor_manager=tm)
+    wh = wormhole.create(appid, relay_url, reactor, tor_manager=tm)
     if code is None:
-        code = yield wh.get_code()
+        wh.allocate_code()
+        code = yield wh.when_code()
     else:
         wh.set_code(code)
     # we'll call this no matter what, even if you passed in a code --
     # maybe it should be only in the 'if' block above?
     if on_code:
         on_code(code)
-    data = yield wh.get()
+    data = yield wh.when_received()
     data = json.loads(data.decode("utf-8"))
     offer = data.get('offer', None)
     if not offer:
@@ -100,9 +101,10 @@ def send(reactor, appid, relay_url, data, code,
         if not tm.tor_available():
             raise NoTorError()
         yield tm.start()
-    wh = wormhole(appid, relay_url, reactor, tor_manager=tm)
+    wh = wormhole.create(appid, relay_url, reactor, tor_manager=tm)
     if code is None:
-        code = yield wh.get_code()
+        wh.allocate_code()
+        code = yield wh.when_code()
     else:
         wh.set_code(code)
     if on_code:
@@ -115,7 +117,7 @@ def send(reactor, appid, relay_url, data, code,
             }
         }).encode("utf-8")
     )
-    data = yield wh.get()
+    data = yield wh.when_received()
     data = json.loads(data.decode("utf-8"))
     answer = data.get('answer', None)
     yield wh.close()
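
Reviewer's note: every `when_*()` method added in `_DeferredWormhole` follows the same pattern — return the cached result immediately if the event already happened, otherwise queue an observer that the matching `got_*()` callback fires later. A dependency-free sketch of that pattern (not part of the patch; `OneShotObservable` is a hypothetical name, and plain callbacks stand in for the twisted Deferreds the real code uses):

```python
class OneShotObservable:
    """Minimal sketch of the cache-or-queue pattern behind when_code()/got_code()."""
    def __init__(self):
        self._value = None
        self._fired = False
        self._observers = []

    def when_fired(self, callback):
        # Like when_code(): if the value is already known, deliver it
        # immediately (the defer.succeed branch); otherwise remember the
        # observer until fire() happens.
        if self._fired:
            callback(self._value)
        else:
            self._observers.append(callback)

    def fire(self, value):
        # Like got_code(): cache the value, notify everyone waiting, then
        # clear the list so each observer fires exactly once.
        self._value = value
        self._fired = True
        for cb in self._observers:
            cb(value)
        self._observers[:] = []

results = []
o = OneShotObservable()
o.when_fired(results.append)      # registered before the value exists: queued
o.fire("4-purple-sausages")       # delivers to the queued observer
o.when_fired(results.append)      # registered after: delivered immediately
```

The same shape explains `closed()`: it walks each observer list and errbacks the pending Deferreds, which is why every `when_*()` method first checks `self._observer_result` before handing out a new Deferred.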