Merge branch '42-overhaul'
This completely rewrites the client, splitting everything into many (13!)
small-ish state machines, merging about 5 months of work. This will enable
the following new features (none of which are fully implemented yet):

* survive the rendezvous server connection being lost, if we can reconnect
* learn the wordlist from the server, after claiming the nameplate, to enable
  i18n wordlists (sender chooses language, receiver tab-completes on the
  matching wordlist)
* likewise we can learn the code-length (number of words) from the server,
  although this needs more thought before we can make it safe
* new "Delegated Mode" API, with callbacks instead of Deferreds
* properly serializable Wormhole state
* "journaled mode": synchronizing outbound messages with application state
  checkpoints to provide robust behavior in the face of frequent and
  uncoordinated shutdown
* making progress even when neither side is connected at the same time
* code-completion with non-readline frontends (e.g. GUI wordlist-dropdown)

User-visible changes from this rewrite:

* wormhole receive: if you use tab-completion, you can only set the nameplate
  once, after which we've claimed that channel and are stuck with it until
  the process exits. This means you can't type "5-<TAB><DEL><DEL>3-", because
  we've already committed to a nameplate of "5". So initial typos are more of
  a problem now. The client will show you an exception, but then you must
  Control-C the process to exit.
* the "you should upgrade to a newer version" message now overlaps with the
  code-input prompt, which is annoying (I hope to fix this before a release)
* networking problems that prevent a connection to the rendezvous server will
  cause silent hangs (until I fix this too)

New docs:

* the docs/ directory now contains descriptions of the various
  client-to-server and client-to-client protocols we use (none of which
  changed)
* docs/api.md now has a comprehensive description of the API (which is still
  subject to change)
* docs/state-machines/ contains DOT-format descriptions of each new state
  machine, although running "automat-visualize wormhole" will build
  more-accurate (but less-informative) diagrams of the actual implementations

refs #42
This commit is contained in:
commit 3d89d78ea5

.gitignore (vendored): 3 lines changed
@@ -58,4 +58,5 @@ target/
 /twistd.pid
 /relay.sqlite
 /misc/node_modules/
-/docs/events.png
+/.automat_visualize/
+/docs/state-machines/*.png
docs/api.md: 654 lines changed
@@ -1,14 +1,14 @@
 # Magic-Wormhole
 
-This library provides a primitive function to securely transfer small amounts
+This library provides a mechanism to securely transfer small amounts
 of data between two computers. Both machines must be connected to the
 internet, but they do not need to have public IP addresses or know how to
 contact each other ahead of time.
 
-Security and connectivity is provided by means of an "invitation code": a
-short string that is transcribed from one machine to the other by the users
-at the keyboard. This works in conjunction with a baked-in "rendezvous
-server" that relays information from one machine to the other.
+Security and connectivity is provided by means of a "wormhole code": a short
+string that is transcribed from one machine to the other by the users at the
+keyboard. This works in conjunction with a baked-in "rendezvous server" that
+relays information from one machine to the other.
 
 The "Wormhole" object provides a secure record pipe between any two programs
 that use the same wormhole code (and are configured with the same application
@@ -17,141 +17,64 @@ but the encrypted data for all messages must pass through (and be temporarily
 stored on) the rendezvous server, which is a shared resource. For this
 reason, larger data (including bulk file transfers) should use the Transit
 class instead. The Wormhole object has a method to create a Transit object
-for this purpose.
+for this purpose. In the future, Transit will be deprecated, and this
+functionality will be incorporated directly as a "dilated wormhole".
+
+A quick example:
+
+```python
+import wormhole
+from twisted.internet.defer import inlineCallbacks
+
+@inlineCallbacks
+def go():
+    w = wormhole.create(appid, relay_url, reactor)
+    w.generate_code()
+    code = yield w.when_code()
+    print("code:", code)
+    w.send(b"outbound data")
+    inbound = yield w.when_received()
+    yield w.close()
+```
 
 ## Modes
 
-This library will eventually offer multiple modes. For now, only "transcribe
-mode" is available.
+The API comes in two flavors: Delegated and Deferred. Controlling the
+Wormhole and sending data is identical in both, but they differ in how
+inbound data and events are delivered to the application.
 
-Transcribe mode has two variants. In the "machine-generated" variant, the
-"initiator" machine creates the invitation code, displays it to the first
-user, they convey it (somehow) to the second user, who transcribes it into
-the second ("receiver") machine. In the "human-generated" variant, the two
-humans come up with the code (possibly without computers), then later
-transcribe it into both machines.
+In Delegated mode, the Wormhole is given a "delegate" object, on which
+certain methods will be called when information is available (e.g. when the
+code is established, or when data messages are received). In Deferred mode,
+the Wormhole object has methods which return Deferreds that will fire at
+these same times.
 
-When the initiator machine generates the invitation code, the initiator
-contacts the rendezvous server and allocates a "channel ID", which is a small
-integer. The initiator then displays the invitation code, which is the
-channel-ID plus a few secret words. The user copies the code to the second
-machine. The receiver machine connects to the rendezvous server, and uses the
-invitation code to contact the initiator. They agree upon an encryption key,
-and exchange a small encrypted+authenticated data message.
-
-When the humans create an invitation code out-of-band, they are responsible
-for choosing an unused channel-ID (simply picking a random 3-or-more digit
-number is probably enough), and some random words. The invitation code uses
-the same format in either variant: channel-ID, a hyphen, and an arbitrary
-string.
-
-The two machines participating in the wormhole setup are not distinguished:
-it doesn't matter which one goes first, and both use the same Wormhole class.
-In the first variant, one side calls `get_code()` while the other calls
-`set_code()`. In the second variant, both sides call `set_code()`. (Note that
-this is not true for the "Transit" protocol used for bulk data-transfer: the
-Transit class currently distinguishes "Sender" from "Receiver", so the
-programs on each side must have some way to decide ahead of time which is
-which).
-
-Each side can then do an arbitrary number of `send()` and `get()` calls.
-`send()` writes a message into the channel. `get()` waits for a new message
-to be available, then returns it. The Wormhole is not meant as a long-term
-communication channel, but some protocols work better if they can exchange an
-initial pair of messages (perhaps offering some set of negotiable
-capabilities), and then follow up with a second pair (to reveal the results
-of the negotiation).
-
-Note: the application developer must be careful to avoid deadlocks (if both
-sides want to `get()`, somebody has to `send()` first).
-
-When both sides are done, they must call `close()`, to flush all pending
-`send()` calls, deallocate the channel, and close the websocket connection.
-
-## Twisted
-
-The Twisted-friendly flow looks like this (note that passing `reactor` is how
-you get a non-blocking Wormhole):
+Delegated mode:
 
 ```python
-from twisted.internet import reactor
-from wormhole.public_relay import RENDEZVOUS_RELAY
-from wormhole import wormhole
-w1 = wormhole(u"appid", RENDEZVOUS_RELAY, reactor)
-d = w1.get_code()
-def _got_code(code):
-    print "Invitation Code:", code
-    return w1.send(b"outbound data")
-d.addCallback(_got_code)
-d.addCallback(lambda _: w1.get())
-def _got(inbound_message):
-    print "Inbound message:", inbound_message
-d.addCallback(_got)
-d.addCallback(w1.close)
-d.addBoth(lambda _: reactor.stop())
-reactor.run()
+class MyDelegate:
+    def wormhole_got_code(self, code):
+        print("code: %s" % code)
+    def wormhole_received(self, data): # called for each message
+        print("got data, %d bytes" % len(data))
+
+w = wormhole.create(appid, relay_url, reactor, delegate=MyDelegate())
+w.generate_code()
 ```
 
-On the other side, you call `set_code()` instead of waiting for `get_code()`:
+Deferred mode:
 
 ```python
-w2 = wormhole(u"appid", RENDEZVOUS_RELAY, reactor)
-w2.set_code(code)
-d = w2.send(my_message)
-...
+w = wormhole.create(appid, relay_url, reactor)
+w.generate_code()
+def print_code(code):
+    print("code: %s" % code)
+w.when_code().addCallback(print_code)
+def received(data):
+    print("got data, %d bytes" % len(data))
+w.when_received().addCallback(received) # gets exactly one message
 ```
 
-Note that the Twisted-form `close()` accepts (and returns) an optional
-argument, so you can use `d.addCallback(w.close)` instead of
-`d.addCallback(lambda _: w.close())`.
-
-## Verifier
-
-For extra protection against guessing attacks, Wormhole can provide a
-"Verifier". This is a moderate-length series of bytes (a SHA256 hash) that is
-derived from the supposedly-shared session key. If desired, both sides can
-display this value, and the humans can manually compare them before allowing
-the rest of the protocol to proceed. If they do not match, then the two
-programs are not talking to each other (they may both be talking to a
-man-in-the-middle attacker), and the protocol should be abandoned.
-
-To retrieve the verifier, you call `d=w.verify()` before any calls to
-`send()/get()`. The Deferred will not fire until internal key-confirmation
-has taken place (meaning the two sides have exchanged their initial PAKE
-messages, and the wormhole codes matched), so `verify()` is also a good way
-to detect typos or mistakes entering the code. The Deferred will errback with
-wormhole.WrongPasswordError if the codes did not match, or it will callback
-with the verifier bytes if they did match.
-
-Once retrieved, you can turn this into hex or Base64 to print it, or render
-it as ASCII-art, etc. Once the users are convinced that `verify()` from both
-sides are the same, call `send()/get()` to continue the protocol. If you call
-`send()/get()` before `verify()`, it will perform the complete protocol
-without pausing.
-
-## Generating the Invitation Code
-
-In most situations, the "sending" or "initiating" side will call `get_code()`
-to generate the invitation code. This returns a string in the form
-`NNN-code-words`. The numeric "NNN" prefix is the "channel id", and is a
-short integer allocated by talking to the rendezvous server. The rest is a
-randomly-generated selection from the PGP wordlist, providing a default of 16
-bits of entropy. The initiating program should display this code to the user,
-who should transcribe it to the receiving user, who gives it to the Receiver
-object by calling `set_code()`. The receiving program can also use
-`input_code()` to use a readline-based input function: this offers tab
-completion of allocated channel-ids and known codewords.
-
-Alternatively, the human users can agree upon an invitation code themselves,
-and provide it to both programs later (both sides call `set_code()`). They
-should choose a channel-id that is unlikely to already be in use (3 or more
-digits are recommended), append a hyphen, and then include randomly-selected
-words or characters. Dice, coin flips, shuffled cards, or repeated sampling
-of a high-resolution stopwatch are all useful techniques.
-
-Note that the code is a human-readable string (the python "unicode" type in
-python2, "str" in python3).
-
 ## Application Identifier
 
 Applications using this library must provide an "application identifier", a
@@ -167,18 +90,464 @@ ten Wormholes are active for a given app-id, the connection-id will only need
 to contain a single digit, even if some other app-id is currently using
 thousands of concurrent sessions.
 
-## Rendezvous Relays
+## Rendezvous Servers
 
-The library depends upon a "rendezvous relay", which is a server (with a
+The library depends upon a "rendezvous server", which is a service (on a
 public IP address) that delivers small encrypted messages from one client to
 the other. This must be the same for both clients, and is generally baked-in
 to the application source code or default config.
 
-This library includes the URL of a public relay run by the author.
-Application developers can use this one, or they can run their own (see the
-`wormhole-server` command and the `src/wormhole/server/` directory) and
-configure their clients to use it instead. This URL is passed as a unicode
-string.
+This library includes the URL of a public rendezvous server run by the
+author. Application developers can use this one, or they can run their own
+(see the `wormhole-server` command and the `src/wormhole/server/` directory)
+and configure their clients to use it instead. This URL is passed as a
+unicode string. Note that because the server actually speaks WebSockets, the
+URL starts with `ws:` instead of `http:`.
+
+## Wormhole Parameters
+
+All wormholes must be created with at least three parameters:
+
+* `appid`: a (unicode) string
+* `relay_url`: a (unicode) string
+* `reactor`: the Twisted reactor object
+
+In addition to these three, the `wormhole.create()` function takes several
+optional arguments:
+
+* `delegate`: provide a Delegate object to enable "delegated mode", or pass
+  None (the default) to get "deferred mode"
+* `journal`: provide a Journal object to enable journaled mode. See
+  journal.md for details. Note that journals only work with delegated mode,
+  not with deferred mode.
+* `tor_manager`: to enable Tor support, create a `wormhole.TorManager`
+  instance and pass it here. This will hide the client's IP address by
+  proxying all connections (rendezvous and transit) through Tor. It also
+  enables connecting to Onion-service transit hints, and (in the future) will
+  enable the creation of Onion-services for transit purposes.
+* `timing`: this accepts a DebugTiming instance, mostly for internal
+  diagnostic purposes, to record the transmit/receive timestamps for all
+  messages. The `wormhole --dump-timing=` feature uses this to build a
+  JSON-format data bundle, and the `misc/dump-timing.py` tool can build a
+  scrollable timing diagram from these bundles.
+* `welcome_handler`: this is a function that will be called when the
+  Rendezvous Server's "welcome" message is received. It is used to display
+  important server messages in an application-specific way.
+* `versions`: this can accept a dictionary (JSON-encodable) of data that will
+  be made available to the peer via the `got_version` event. This data is
+  delivered before any data messages, and can be used to indicate peer
+  capabilities.
+
+## Code Management
+
+Each wormhole connection is defined by a shared secret "wormhole code". These
+codes can be generated offline (by picking a unique number and some secret
+words), but are more commonly generated by whoever creates the first
+wormhole. In the "bin/wormhole" file-transfer tool, the default behavior is
+for the sender to create the code, and for the receiver to type it in.
+
+The code is a (unicode) string in the form `NNN-code-words`. The numeric
+"NNN" prefix is the "channel id" or "nameplate", and is a short integer
+allocated by talking to the rendezvous server. The rest is a
+randomly-generated selection from the PGP wordlist, providing a default of 16
+bits of entropy. The initiating program should display this code to the user,
+who should transcribe it to the receiving user, who gives it to their local
+Wormhole object by calling `set_code()`. The receiving program can also use
+`input_code()` to use a readline-based input function: this offers tab
+completion of allocated channel-ids and known codewords.
+
+The Wormhole object has three APIs for generating or accepting a code:
+
+* `w.generate_code(length=2)`: this contacts the Rendezvous Server, allocates
+  a short numeric nameplate, chooses a configurable number of random words,
+  then assembles them into the code
+* `w.set_code(code)`: this accepts the code as an argument
+* `helper = w.input_code()`: this facilitates interactive entry of the code,
+  with tab-completion. The helper object has methods to return a list of
+  viable completions for whatever portion of the code has been entered so
+  far. A convenience wrapper is provided to attach this to the `rlcompleter`
+  function of libreadline.
+
+No matter which mode is used, the `w.when_code()` Deferred (or
+`delegate.wormhole_got_code(code)` callback) will fire when the code is
+known. `when_code` is clearly necessary for `generate_code`, since there's no
+other way to learn what code was created, but it may be useful in other modes
+for consistency.
+
+The code-entry Helper object has the following API:
+
+* `refresh_nameplates()`: requests an updated list of nameplates from the
+  Rendezvous Server. These form the first portion of the wormhole code (e.g.
+  "4" in "4-purple-sausages"). Note that they are unicode strings (so "4",
+  not 4). The Helper will get the response in the background, and calls to
+  `get_nameplate_completions()` after the response will use the new list.
+  Calling this after `h.choose_nameplate` will raise
+  `AlreadyChoseNameplateError`.
+* `matches = h.get_nameplate_completions(prefix)`: returns (synchronously) a
+  set of completions for the given nameplate prefix, along with the hyphen
+  that always follows the nameplate (and separates the nameplate from the
+  rest of the code). For example, if the server reports nameplates 1, 12, 13,
+  24, and 170 are in use, `get_nameplate_completions("1")` will return
+  `{"1-", "12-", "13-", "170-"}`. You may want to sort these before
+  displaying them to the user. Raises `AlreadyChoseNameplateError` if called
+  after `h.choose_nameplate`.
+* `h.choose_nameplate(nameplate)`: accepts a string with the chosen
+  nameplate. May only be called once, after which
+  `AlreadyChoseNameplateError` is raised. (In the future, this might
+  return a Deferred that fires (with None) when the nameplate's wordlist is
+  known (which happens after the nameplate is claimed, requiring a roundtrip
+  to the server).)
+* `d = h.when_wordlist_is_available()`: return a Deferred that fires (with
+  None) when the wordlist is known. This can be used to block a readline
+  frontend which has just called `h.choose_nameplate()` until the resulting
+  wordlist is known, which can improve the tab-completion behavior.
+* `matches = h.get_word_completions(prefix)`: return (synchronously) a set of
+  completions for the given words prefix. This will include a trailing hyphen
+  if more words are expected. The possible completions depend upon the
+  wordlist in use for the previously-claimed nameplate, so calling this
+  before `choose_nameplate` will raise `MustChooseNameplateFirstError`.
+  Calling this after `h.choose_words()` will raise `AlreadyChoseWordsError`.
+  Given a prefix like "su", this returns a set of strings which are potential
+  matches (e.g. `{"supportive-", "surrender-", "suspicious-"}`). The prefix
+  should not include the nameplate, but *should* include whatever words and
+  hyphens have been typed so far (the default wordlist uses alternate lists,
+  where even-numbered words have three syllables, and odd-numbered words have
+  two, so the completions depend upon how many words are present, not just
+  the partial last word). E.g. `get_word_completions("pr")` will return
+  `{"processor-", "provincial-", "proximate-"}`, while
+  `get_word_completions("opulent-pr")` will return `{"opulent-preclude",
+  "opulent-prefer", "opulent-preshrunk", "opulent-printer",
+  "opulent-prowler"}` (note the lack of a trailing hyphen, because the
+  wordlist is expecting a code of length two). If the wordlist is not yet
+  known, this returns an empty set. All return values will
+  `.startswith(prefix)`. The frontend is responsible for sorting the results
+  before display.
+* `h.choose_words(words)`: call this when the user is finished typing in the
+  code. It does not return anything, but will cause the Wormhole's
+  `w.when_code()` (or corresponding delegate) to fire, and triggers the
+  wormhole connection process. This accepts a string like "purple-sausages",
+  without the nameplate. It must be called after `h.choose_nameplate()` or
+  `MustChooseNameplateFirstError` will be raised. May only be called once,
+  after which `AlreadyChoseWordsError` is raised.
+
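The nameplate-completion rule described above can be illustrated with a small standalone sketch. This is not the library's implementation: `nameplate_completions` is a hypothetical stand-in showing the documented behavior of `h.get_nameplate_completions(prefix)`.

```python
# Standalone illustration of the documented completion rule: each match
# is a claimed nameplate (a unicode string) plus the hyphen that always
# separates the nameplate from the words.
def nameplate_completions(claimed, prefix):
    return {n + "-" for n in claimed if n.startswith(prefix)}

# Example from the text: the server reports nameplates 1, 12, 13, 24,
# and 170 in use.
matches = nameplate_completions({"1", "12", "13", "24", "170"}, "1")
print(sorted(matches))  # ['1-', '12-', '13-', '170-'] after sorting
```

As the documentation notes, the results are a set, so a frontend would sort them before display.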
+The `input_with_completion` wrapper is a function that knows how to use the
+code-entry helper to do tab completion of wormhole codes:
+
+```python
+from wormhole import create, input_with_completion
+w = create(appid, relay_url, reactor)
+input_with_completion("Wormhole code:", w.input_code(), reactor)
+d = w.when_code()
+```
+
+This helper runs python's (raw) `input()` function inside a thread, since
+`input()` normally blocks.
+
+The two machines participating in the wormhole setup are not distinguished:
+it doesn't matter which one goes first, and both use the same Wormhole
+constructor function. However, if `w.generate_code()` is used, only one side
+should use it.
+
+## Offline Codes
+
+In most situations, the "sending" or "initiating" side will call
+`w.generate_code()` and display the resulting code. The sending human reads
+it and speaks, types, performs charades, or otherwise transmits the code to
+the receiving human. The receiving human then types it into the receiving
+computer, where it either calls `w.set_code()` (if the code is passed in via
+argv) or `w.input_code()` (for interactive entry).
+
+Usually one machine generates the code, and a pair of humans transcribes it
+to the second machine (so `w.generate_code()` on one side, and `w.set_code()`
+or `w.input_code()` on the other). But it is also possible for the humans to
+generate the code offline, perhaps at a face-to-face meeting, and then take
+the code back to their computers. In this case, `w.set_code()` will be used
+on both sides. It is unlikely that the humans will restrict themselves to a
+pre-established wordlist when manually generating codes, so the completion
+feature of `w.input_code()` is not helpful.
+
+When the humans create an invitation code out-of-band, they are responsible
+for choosing an unused channel-ID (simply picking a random 3-or-more digit
+number is probably enough), and some random words. Dice, coin flips, shuffled
+cards, or repeated sampling of a high-resolution stopwatch are all useful
+techniques. The invitation code uses the same format either way: channel-ID,
+a hyphen, and an arbitrary string. There is no need to encode the sampled
+random values (e.g. by using the Diceware wordlist) unless that makes it
+easier to transcribe: e.g. rolling 6 dice could result in a code like
+"913-166532", and flipping 16 coins could result in "123-HTTHHHTTHTTHHTHH".
+
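The format described above (channel-ID, a hyphen, then an arbitrary string) means a code can always be split at its first hyphen, no matter how many words follow. A minimal sketch, using a hypothetical `split_code` helper rather than any library function:

```python
# The wormhole code is "<nameplate>-<rest>": everything before the first
# hyphen is the channel-ID/nameplate, everything after it is the secret
# portion (which may itself contain hyphens).
def split_code(code):
    nameplate, words = code.split("-", 1)
    return nameplate, words

print(split_code("4-purple-sausages"))  # ('4', 'purple-sausages')
print(split_code("913-166532"))         # ('913', '166532')
```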
+## Verifier
+
+For extra protection against guessing attacks, Wormhole can provide a
+"Verifier". This is a moderate-length series of bytes (a SHA256 hash) that is
+derived from the supposedly-shared session key. If desired, both sides can
+display this value, and the humans can manually compare them before allowing
+the rest of the protocol to proceed. If they do not match, then the two
+programs are not talking to each other (they may both be talking to a
+man-in-the-middle attacker), and the protocol should be abandoned.
+
+Deferred-mode applications can wait for `d=w.when_verified()`: the Deferred
+it returns will fire with the verifier. You can turn this into hex or Base64
+to print it, or render it as ASCII-art, etc.
+
+Asking the wormhole object for the verifier does not affect the flow of the
+protocol. To benefit from verification, applications must refrain from
+sending any data (with `w.send(data)`) until after the verifiers are approved
+by the user. In addition, applications must queue or otherwise ignore
+incoming (received) messages until that point. However once the verifiers are
+confirmed, previously-received messages can be considered valid and processed
+as usual.
+
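Rendering the verifier for manual comparison might look like the sketch below. The verifier bytes here are a locally computed SHA256 stand-in, not a real session-key hash delivered by `when_verified()`:

```python
import hashlib

# Stand-in for the verifier bytes (a SHA256-sized value derived from
# the session key). Both sides would receive the same bytes.
verifier = hashlib.sha256(b"example session key material").digest()

# Render as hex so the humans can read the strings aloud and compare;
# a SHA256 verifier is 32 bytes, i.e. 64 hex characters.
print(verifier.hex())
```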
+## Welcome Messages
+
+The first message sent by the rendezvous server is a "welcome" message (a
+dictionary). Clients should not wait for this message, but when it arrives,
+they should process the keys it contains.
+
+The welcome message serves three main purposes:
+
+* notify users about important server changes, such as CAPTCHA requirements
+  driven by overload, or donation requests
+* enable future protocol negotiation between clients and the server
+* advise users of the CLI tools (`wormhole send`) to upgrade to a new version
+
+There are three keys currently defined for the welcome message, all of which
+are optional (the welcome message omits "error" and "motd" unless the server
+operator needs to signal a problem).
+
+* `motd`: if this key is present, it will be a string with embedded newlines.
+  The client should display this string to the user, including a note that it
+  comes from the magic-wormhole Rendezvous Server and that server's URL.
+* `error`: if present, the server has decided it cannot service this client.
+  The string will be wrapped in a `WelcomeError` (which is a subclass of
+  `WormholeError`), and all API calls will signal errors (pending Deferreds
+  will errback). The rendezvous connection will be closed.
+* `current_cli_version`: if present, the server is advising instances of the
+  CLI tools (the `wormhole` command included in the python distribution) that
+  there is a newer release available, so users should upgrade if they can,
+  because more features will be available if both clients are running the
+  same version. The CLI tools compare this string against their `__version__`
+  and can print a short message to stderr if an upgrade is warranted.
+
+There is currently no facility in the server to actually send `motd`, but a
+static `error` string can be included by running the server with
+`--signal-error=MESSAGE`.
+
+The main idea of `error` is to allow the server to cleanly inform the client
+about some necessary action it didn't take. The server currently sends the
+welcome message as soon as the client connects (even before it receives the
+"claim" request), but a future server could wait for a required client
+message and signal an error (via the Welcome message) if it didn't see this
+extra message before the CLAIM arrived.
+
+This could enable changes to the protocol, e.g. requiring a CAPTCHA or
+proof-of-work token when the server is under DoS attack. The new server would
+send the current requirements in an initial message (which old clients would
+ignore). New clients would be required to send the token before their "claim"
+message. If the server sees "claim" before "token", it knows that the client
+is too old to know about this protocol, and it could send a "welcome" with an
+`error` field containing instructions (explaining to the user that the server
+is under attack, and they must either upgrade to a client that can speak the
+new protocol, or wait until the attack has passed). Either case is better
+than an opaque exception later when the required message fails to arrive.
+
+(Note that the server can also send an explicit ERROR message at any time,
+and the client should react with a ServerError. Versions 0.9.2 and earlier of
+the library did not pay attention to the ERROR message, hence the server
+should deliver errors in a WELCOME message if at all possible.)
+
+The `error` field is handled internally by the Wormhole object. The other
+fields are processed by an application-supplied "welcome handler" function,
+supplied as an argument to the `wormhole()` constructor. This function will
+be called with the full welcome dictionary, so any other keys that a future
+server might send will be available to it. If the welcome handler raises
+`WelcomeError`, the connection will be closed just as if an `error` key had
+been received. The handler may be called multiple times (once per connection,
+if the rendezvous connection is lost and then reestablished), so applications
+should avoid presenting the user with redundant messages.
+
+The default welcome handler will print `motd` to stderr, and will ignore
+`current_cli_version`.
+
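An application-supplied welcome handler following this description might look like the sketch below. The function name, message wording, and server URL are illustrative, not part of the API, and the `error` key is deliberately left alone because the Wormhole object handles it internally:

```python
def describe_welcome(welcome, url="ws://relay.example.org:4000/v1"):
    """Return display lines for a server "welcome" dict (a sketch).

    Unknown keys are ignored, so a future server can add more. The
    "error" key is not processed here: the Wormhole object wraps it in
    a WelcomeError before the handler would ever act on it.
    """
    lines = []
    if "motd" in welcome:
        lines.append("Note (from the rendezvous server at %s):" % url)
        lines.append(welcome["motd"])
    if "current_cli_version" in welcome:
        lines.append("A newer CLI release (%s) is available."
                     % welcome["current_cli_version"])
    return lines

print(describe_welcome({"motd": "please donate!"}))
```

A real handler would typically write these lines to stderr, matching the default handler's treatment of `motd`.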
## Events

As the wormhole connection is established, several events may be dispatched
to the application. In Delegated mode, these are dispatched by calling
functions on the delegate object. In Deferred mode, the application retrieves
Deferred objects from the wormhole, and event dispatch is performed by firing
those Deferreds.

* got_code (`yield w.when_code()` / `dg.wormhole_code(code)`): fired when the
  wormhole code is established, either after `w.generate_code()` finishes the
  generation process, or when the Input Helper returned by `w.input_code()`
  has been told `h.set_words()`, or immediately after `w.set_code(code)` is
  called. This is most useful after calling `w.generate_code()`, to show the
  generated code to the user so they can transcribe it to their peer.
* key (`yield w.when_key()` / `dg.wormhole_key()`): fired when the
  key-exchange process has completed and a purported shared key is
  established. At this point we do not know that anyone else actually shares
  this key: the peer may have used the wrong code, or may have disappeared
  altogether. To wait for proof that the key is shared, wait for
  `when_verified` instead. This event is really only useful for detecting
  that the initiating peer has disconnected after leaving the initial PAKE
  message, to display a pacifying message to the user.
* verified (`verifier = yield w.when_verified()` /
  `dg.wormhole_verified(verifier)`): fired when the key-exchange process has
  completed and a valid VERSION message has arrived. The "verifier" is a byte
  string with a hash of the shared session key; clients can compare them
  (probably as hex) to ensure that they're really talking to each other, and
  not to a man-in-the-middle. When `verified` fires, this side knows
  that *someone* has used the correct wormhole code; if someone used the
  wrong code, the VERSION message cannot be decrypted, and the wormhole will
  be closed instead.
* version (`yield w.when_version()` / `dg.wormhole_version(versions)`): fired
  when the VERSION message arrives from the peer. This fires at the same time
  as `verified`, but delivers the "app_versions" data (as passed into
  `wormhole.create(versions=)`) instead of the verifier string.
* received (`yield w.when_received()` / `dg.wormhole_received(data)`): fired
  each time a data message arrives from the peer, with the bytestring that
  the peer passed into `w.send(data)`.
* closed (`yield w.close()` / `dg.wormhole_closed(result)`): fired when
  `w.close()` has finished shutting down the wormhole, which means all
  nameplates and mailboxes have been deallocated, and the WebSocket
  connection has been closed. This also fires if an internal error occurs
  (specifically WrongPasswordError, which indicates that an invalid encrypted
  message was received), which also shuts everything down. The `result` value
  is an exception (or Failure) object if the wormhole closed badly, or a
  string like "happy" if it had no problems before shutdown.
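To make the Delegated-mode dispatch concrete, here is a minimal stand-in delegate: the method names come from the list above, but the driver calling them at the bottom is a simulation of a happy run, not the real library:

```python
class MyDelegate:
    """Collects the events described above, in arrival order."""
    def __init__(self):
        self.events = []
    def wormhole_code(self, code):
        self.events.append(("code", code))
    def wormhole_key(self):
        self.events.append(("key", None))
    def wormhole_verified(self, verifier):
        self.events.append(("verified", verifier))
    def wormhole_version(self, versions):
        self.events.append(("version", versions))
    def wormhole_received(self, data):
        self.events.append(("received", data))
    def wormhole_closed(self, result):
        self.events.append(("closed", result))

# Simulated dispatch, in the order a trouble-free wormhole produces them:
dg = MyDelegate()
dg.wormhole_code("4-purple-sausages")
dg.wormhole_key()
dg.wormhole_verified(b"\x12\x34")
dg.wormhole_version({"app_versions": {}})
dg.wormhole_received(b"hello")
dg.wormhole_closed("happy")
```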
## Sending Data

The main purpose of a Wormhole is to send data. At any point after
construction, callers can invoke `w.send(data)`. This will queue the message
if necessary, but (if all goes well) will eventually result in the peer
getting a `received` event and the data being delivered to the application.

Since Wormhole provides an ordered record pipe, each call to `w.send` will
result in exactly one `received` event on the far side. Records are not
split, merged, dropped, or reordered.

Each side can do an arbitrary number of `send()` calls. The Wormhole is not
meant as a long-term communication channel, but some protocols work better if
they can exchange an initial pair of messages (perhaps offering some set of
negotiable capabilities), and then follow up with a second pair (to reveal
the results of the negotiation). The Rendezvous Server does not currently
enforce any particular limits on the number of messages, the size of
messages, or the rate of transmission, but in general clients are expected
to send fewer than a dozen messages, of no more than perhaps 20kB in size
(remember that all these messages are temporarily stored in a SQLite
database on the server). A future version of the protocol may make these
limits more explicit, and will allow clients to ask for greater capacity
when they connect (probably by passing additional "mailbox attribute"
parameters with the `allocate`/`claim`/`open` messages).

For bulk data transfer, see "transit.md", or the "Dilation" section below.
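Applications typically define their own record format on top of this pipe; one common pattern (used by the file-transfer tool described elsewhere in these docs) is UTF-8-encoded JSON dictionaries. A sketch of the encode/decode pair, with the `w.send()` call itself shown only in a comment since it needs a live wormhole:

```python
import json

def encode_record(msg_dict):
    # Each record is one complete JSON dictionary; the wormhole
    # guarantees the peer receives it as exactly one 'received' event.
    return json.dumps(msg_dict).encode("utf-8")

def decode_record(data_bytes):
    return json.loads(data_bytes.decode("utf-8"))

offer = encode_record({"offer": {"message": "hello"}})
# w.send(offer)  # on a live wormhole
assert decode_record(offer) == {"offer": {"message": "hello"}}
```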
## Closing

When the application is done with the wormhole, it should call `w.close()`,
and wait for a `closed` event. This ensures that all server-side resources
are released (allowing the nameplate to be re-used by some other client), and
all network sockets are shut down.

In Deferred mode, this just means waiting for the Deferred returned by
`w.close()` to fire. In Delegated mode, this means calling `w.close()` (which
doesn't return anything) and waiting for the delegate's `wormhole_closed()`
method to be called.

`w.close()` will errback (with some form of `WormholeError`) if anything went
wrong with the process, such as:

* `WelcomeError`: the server told us to signal an error, probably because the
  client is too old to understand some new protocol feature
* `ServerError`: the server rejected something we did
* `LonelyError`: we didn't hear from the other side, so no key was
  established
* `WrongPasswordError`: we received at least one incorrectly-encrypted
  message. This probably indicates that the other side used a different
  wormhole code than we did, perhaps because of a typo, or maybe an attacker
  tried to guess your code and failed.

If the wormhole was happy at the time it was closed, the `w.close()` Deferred
will callback (probably with the string "happy", but this may change in the
future).
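As an illustration of how an application might branch on the close result, here is a stand-in sketch: the class names come from the list above, but the flat hierarchy under `WormholeError` is an assumption, and `describe_close` is an invented helper:

```python
class WormholeError(Exception): pass
class WelcomeError(WormholeError): pass
class ServerError(WormholeError): pass
class LonelyError(WormholeError): pass
class WrongPasswordError(WormholeError): pass

def describe_close(result):
    # 'result' is what the close event delivers: a string like "happy"
    # on success, or a WormholeError instance on failure.
    if isinstance(result, WrongPasswordError):
        return "wrong code (typo, or an attacker guessed and failed)"
    if isinstance(result, LonelyError):
        return "never heard from the other side"
    if isinstance(result, WormholeError):
        return "closed with error: %s" % type(result).__name__
    return "closed cleanly: %s" % result

assert describe_close("happy") == "closed cleanly: happy"
```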
## Serialization

(NOTE: this section is speculative: this code has not yet been written)

Wormhole objects can be serialized. This can be useful for apps which save
their own state before shutdown, and restore it when they next start up
again.

The `w.serialize()` method returns a dictionary which can be JSON-encoded
into a unicode string (most applications will probably want to UTF-8-encode
this into a bytestring before saving it on disk somewhere).

To restore a Wormhole, call `wormhole.from_serialized(data, reactor,
delegate)`. This will return a wormhole in roughly the same state as the one
that was serialized (of course all the network connections will be
disconnected).

Serialization only works for delegated-mode wormholes (since Deferreds point
at functions, which cannot be serialized easily). It also only works for
"non-dilated" wormholes (see below).

To ensure correct behavior, serialization should probably only be done in
"journaled mode". See journal.md for details.

If you use serialization, be careful never to use the same partial wormhole
object twice.
## Dilation

(NOTE: this section is speculative: this code has not yet been written)

In the longer term, the Wormhole object will incorporate the "Transit"
functionality (see transit.md) directly, removing the need to instantiate a
second object. A Wormhole can be "dilated" into a form that is suitable for
bulk data transfer.

All wormholes start out "undilated". In this state, all messages are queued
on the Rendezvous Server for the lifetime of the wormhole, and server-imposed
number/size/rate limits apply. Calling `w.dilate()` initiates the dilation
process, and success is signalled via either `d=w.when_dilated()` firing, or
`dg.wormhole_dilated()` being called. Once dilated, the Wormhole can be used
as an IConsumer/IProducer, and messages will be sent on a direct connection
(if possible) or through the transit relay (if not).

What's good about a non-dilated wormhole?

* setup is faster: no delay while it tries to make a direct connection
* survives temporary network outages, since messages are queued
* works with "journaled mode", allowing progress to be made even when both
  sides are never online at the same time, by serializing the wormhole

What's good about dilated wormholes?

* they support bulk data transfer
* you get flow control (backpressure), and IProducer/IConsumer
* throughput is faster: no store-and-forward step

Use non-dilated wormholes when your application only needs to exchange a
couple of messages, for example to set up public keys or provision access
tokens. Use a dilated wormhole to move large files.

Dilated wormholes can provide multiple "channels": these are multiplexed
through the single (encrypted) TCP connection. Each channel is a separate
stream (offering IProducer/IConsumer).

To create a channel, call `c = w.create_channel()` on a dilated wormhole. The
"channel ID" can be obtained with `c.get_id()`. This ID will be a short
(unicode) string, which can be sent to the other side via a normal
`w.send()`, or any other means. On the other side, use `c =
w.open_channel(channel_id)` to get a matching channel object.

Then use `c.send(data)` and `d=c.when_received()` to exchange data, or wire
them up with `c.registerProducer()`. Note that channels do not close until
the wormhole connection is closed, so they do not have separate `close()`
methods or events. Therefore if you plan to send files through them, you'll
need to inform the recipient ahead of time about how many bytes to expect.
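Since channels (as sketched here) have no close event, a sender would announce the byte count up front. A minimal framing sketch under that assumption, using only the header-then-payload idea (the channel API above is speculative, so only the framing itself is shown):

```python
import json

def frame_file(filename, data_bytes):
    # First record: a header announcing how many bytes will follow,
    # since channels do not signal end-of-stream themselves.
    header = json.dumps({"filename": filename,
                         "filesize": len(data_bytes)}).encode("utf-8")
    return [header, data_bytes]

records = frame_file("notes.txt", b"hello world")
header = json.loads(records[0].decode("utf-8"))
assert header["filesize"] == len(records[1]) == 11
```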
## Bytes, Strings, Unicode, and Python 3

* transit connection hints (e.g. "host:port")
* application identifier
* derived-key "purpose" string: `w.derive_key(PURPOSE, LENGTH)`

## Full API list

action                            | Deferred-Mode        | Delegated-Mode
--------------------------------- | -------------------- | --------------
w.generate_code()                 |                      |
w.set_code(code)                  |                      |
h=w.input_code()                  |                      |
.                                 | d=w.when_code()      | dg.wormhole_code(code)
.                                 | d=w.when_verified()  | dg.wormhole_verified(verifier)
.                                 | d=w.when_version()   | dg.wormhole_version(version)
w.send(data)                      |                      |
.                                 | d=w.when_received()  | dg.wormhole_received(data)
key=w.derive_key(purpose, length) |                      |
w.close()                         |                      | dg.wormhole_closed(result)
.                                 | d=w.close()          |
---
`docs/client-protocol.md` (new file, 63 lines)
# Client-to-Client Protocol

Wormhole clients do not talk directly to each other (at least at first): they
only connect directly to the Rendezvous Server. They ask this server to
convey messages to the other client (via the `add` command and the `message`
response). This document explains the format of these client-to-client
messages.

Each such message contains a "phase" string, and a hex-encoded binary "body".

Any phase which is purely numeric (`^\d+$`) is reserved for application data,
and will be delivered in numeric order. All other phases are reserved for the
Wormhole client itself. Clients will ignore any phase they do not recognize.

Immediately upon opening the mailbox, clients send the `pake` phase, which
contains the binary SPAKE2 message (the one computed as `X+M*pw` or
`Y+N*pw`).

Upon receiving their peer's `pake` phase, clients compute and remember the
shared key. They derive the "verifier" (a hash of the shared key) and deliver
it to the application by calling `got_verifier`: applications can display
this to users who want additional assurance (by manually comparing the values
from both sides: they ought to be identical). At this point clients also send
the encrypted `version` phase, whose plaintext payload is a UTF-8-encoded
JSON-encoded dictionary of metadata. This allows the two Wormhole instances
to signal their ability to do other things (like "dilate" the wormhole). The
version data will also include an `app_versions` key which contains a
dictionary of metadata provided by the application, allowing apps to perform
similar negotiation.

At this stage, the client knows the supposed shared key, but has not yet seen
evidence that the peer knows it too. When the first peer message arrives
(i.e. the first message with a `.side` that does not equal our own), it will
be decrypted: we use authenticated encryption (`nacl.SecretBox`), so if this
decryption succeeds, then we're confident that *somebody* used the same
wormhole code as us. This event pushes the client mood from "lonely" to
"happy".

This might be triggered by the peer's `version` message, but if we had to
re-establish the Rendezvous Server connection, we might get peer messages out
of order and see some application-level message first.

When a `version` message is successfully decrypted, the application is
signaled with `got_version`. When any application message is successfully
decrypted, `received` is signaled. Application messages are delivered
strictly in order: if we see phases 3, then 2, then 1, all three will be
delivered in sequence after phase 1 is received.
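The strict in-order delivery of numeric phases can be sketched as a small reordering buffer. This is an illustration of the rule above, not the library's internal code:

```python
def make_ordered_delivery():
    pending = {}
    delivered = []
    next_phase = [1]
    def on_message(phase, body):
        # Hold out-of-order phases until every earlier phase has arrived.
        pending[phase] = body
        while next_phase[0] in pending:
            delivered.append(pending.pop(next_phase[0]))
            next_phase[0] += 1
        return delivered
    return on_message

on_message = make_ordered_delivery()
on_message(3, "c")
on_message(2, "b")
assert on_message(1, "a") == ["a", "b", "c"]  # 3, 2, 1 arrive; delivered 1, 2, 3
```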
If any message cannot be successfully decrypted, the mood is set to "scary",
and the wormhole is closed. All pending Deferreds will be errbacked with a
`WrongPasswordError` (a subclass of `WormholeError`), the nameplate/mailbox
will be released, and the WebSocket connection will be dropped. If the
application calls `close()`, the resulting Deferred will not fire until
deallocation has finished and the WebSocket is closed, and then it will fire
with an errback.

Both `version` and all numeric (app-specific) phases are encrypted. The
message body will be the hex-encoded output of a NaCl `SecretBox`, keyed by a
phase+side-specific key (computed with HKDF-SHA256, using the shared PAKE
key as the secret input, and `wormhole:phase:%s%s % (SHA256(side),
SHA256(phase))` as the CTXinfo), with a random nonce.
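The phase-key derivation can be sketched with a stdlib-only HKDF-SHA256. This illustrates the construction described above but is not the library's code: the empty-salt choice is an assumption, and the `nacl.SecretBox` encryption step is omitted:

```python
import hashlib
import hmac

def hkdf_sha256(secret, ctxinfo, length):
    # RFC 5869 HKDF-SHA256 with an empty salt (zero-filled), as a sketch.
    prk = hmac.new(b"\x00" * 32, secret, hashlib.sha256).digest()
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + ctxinfo + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def derive_phase_key(shared_key, side, phase):
    # CTXinfo = "wormhole:phase:" + SHA256(side) + SHA256(phase)
    ctx = (b"wormhole:phase:"
           + hashlib.sha256(side.encode()).digest()
           + hashlib.sha256(phase.encode()).digest())
    return hkdf_sha256(shared_key, ctx, 32)

k1 = derive_phase_key(b"shared-pake-key", "side1", "version")
k2 = derive_phase_key(b"shared-pake-key", "side2", "version")
assert len(k1) == 32 and k1 != k2  # each side+phase gets its own key
```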
---
(deleted file, 98 lines)

```dot
digraph {
  /*rankdir=LR*/
  api_get_code [label="get_code" shape="hexagon" color="red"]
  api_input_code [label="input_code" shape="hexagon" color="red"]
  api_set_code [label="set_code" shape="hexagon" color="red"]
  verify [label="verify" shape="hexagon" color="red"]
  send [label="API\nsend" shape="hexagon" color="red"]
  get [label="API\nget" shape="hexagon" color="red"]
  close [label="API\nclose" shape="hexagon" color="red"]

  event_connected [label="connected" shape="box"]
  event_learned_code [label="learned\ncode" shape="box"]
  event_learned_nameplate [label="learned\nnameplate" shape="box"]
  event_received_mailbox [label="received\nmailbox" shape="box"]
  event_opened_mailbox [label="opened\nmailbox" shape="box"]
  event_built_msg1 [label="built\nmsg1" shape="box"]
  event_mailbox_used [label="mailbox\nused" shape="box"]
  event_learned_PAKE [label="learned\nmsg2" shape="box"]
  event_established_key [label="established\nkey" shape="box"]
  event_computed_verifier [label="computed\nverifier" shape="box"]
  event_received_confirm [label="received\nconfirm" shape="box"]
  event_received_message [label="received\nmessage" shape="box"]
  event_received_released [label="ack\nreleased" shape="box"]
  event_received_closed [label="ack\nclosed" shape="box"]

  event_connected -> api_get_code
  event_connected -> api_input_code
  api_get_code -> event_learned_code
  api_input_code -> event_learned_code
  api_set_code -> event_learned_code

  maybe_build_msg1 [label="build\nmsg1"]
  maybe_claim_nameplate [label="claim\nnameplate"]
  maybe_send_pake [label="send\npake"]
  maybe_send_phase_messages [label="send\nphase\nmessages"]

  event_connected -> maybe_claim_nameplate
  event_connected -> maybe_send_pake

  event_built_msg1 -> maybe_send_pake

  event_learned_code -> maybe_build_msg1
  event_learned_code -> event_learned_nameplate

  maybe_build_msg1 -> event_built_msg1
  event_learned_nameplate -> maybe_claim_nameplate
  maybe_claim_nameplate -> event_received_mailbox [style="dashed"]

  event_received_mailbox -> event_opened_mailbox
  maybe_claim_nameplate -> event_learned_PAKE [style="dashed"]
  maybe_claim_nameplate -> event_received_confirm [style="dashed"]

  event_opened_mailbox -> event_learned_PAKE [style="dashed"]
  event_learned_PAKE -> event_mailbox_used [style="dashed"]
  event_learned_PAKE -> event_received_confirm [style="dashed"]
  event_received_confirm -> event_received_message [style="dashed"]

  send -> maybe_send_phase_messages
  release_nameplate [label="release\nnameplate"]
  event_mailbox_used -> release_nameplate
  event_opened_mailbox -> maybe_send_pake
  event_opened_mailbox -> maybe_send_phase_messages

  event_learned_PAKE -> event_established_key
  event_established_key -> event_computed_verifier
  event_established_key -> check_confirmation
  event_established_key -> maybe_send_phase_messages

  check_confirmation [label="check\nconfirmation"]
  event_received_confirm -> check_confirmation

  notify_verifier [label="notify\nverifier"]
  check_confirmation -> notify_verifier
  verify -> notify_verifier
  event_computed_verifier -> notify_verifier

  check_confirmation -> error
  event_received_message -> error
  event_received_message -> get
  event_established_key -> get

  close -> close_mailbox
  close -> release_nameplate
  error [label="signal\nerror"]
  error -> close_mailbox
  error -> release_nameplate

  release_nameplate -> event_received_released [style="dashed"]
  close_mailbox [label="close\nmailbox"]
  close_mailbox -> event_received_closed [style="dashed"]

  maybe_close_websocket [label="close\nwebsocket"]
  event_received_released -> maybe_close_websocket
  event_received_closed -> maybe_close_websocket
  maybe_close_websocket -> event_websocket_closed [style="dashed"]
  event_websocket_closed [label="websocket\nclosed"]
}
```
---
`docs/file-transfer-protocol.md` (new file, 191 lines)
# File-Transfer Protocol

The `bin/wormhole` tool uses a Wormhole to establish a connection, then
speaks a file-transfer-specific protocol over that Wormhole to decide how to
transfer the data. This application-layer protocol is described here.

All application-level messages are dictionaries, which are JSON-encoded and
UTF-8-encoded before being handed to `wormhole.send` (which then encrypts
them before sending through the rendezvous server to the peer).

## Sender

`wormhole send` has two main modes: file/directory (which requires a
non-wormhole Transit connection), or text (which does not).

If the sender is doing files or directories, its first message contains just
a `transit` key, whose value is a dictionary with `abilities-v1` and
`hints-v1` keys. These are given to the Transit object, described below.

Then (for both files/directories and text) it sends a message with an `offer`
key. The offer contains a single key, exactly one of `message`, `file`, or
`directory`. For `message`, the value is the text being sent. For `file` and
`directory`, the value is a dictionary with additional information:

* `message`: the text message, for text-mode
* `file`: for file-mode, a dict with `filename` and `filesize`
* `directory`: for directory-mode, a dict with:
  * `mode`: the compression mode, currently always `zipfile/deflated`
  * `dirname`
  * `zipsize`: integer, size of the transmitted data in bytes
  * `numbytes`: integer, estimated total size of the uncompressed directory
  * `numfiles`: integer, number of files+directories being sent
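The three offer shapes can be illustrated concretely (all the values below are invented examples):

```python
import json

# text-mode offer
text_offer = {"offer": {"message": "hello, world"}}

# file-mode offer
file_offer = {"offer": {"file": {"filename": "notes.txt",
                                 "filesize": 1234}}}

# directory-mode offer
dir_offer = {"offer": {"directory": {"mode": "zipfile/deflated",
                                     "dirname": "photos",
                                     "zipsize": 9876,
                                     "numbytes": 12345,
                                     "numfiles": 7}}}

# each is JSON- and UTF-8-encoded before being handed to wormhole.send
encoded = json.dumps(file_offer).encode("utf-8")
assert len(json.loads(encoded)["offer"]) == 1  # exactly one offer key
```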
The sender runs a loop where it waits for similar dictionary-shaped messages
from the recipient, and processes them. It reacts to the following keys:

* `error`: use the value to throw a TransferError, and terminate
* `transit`: use the value to build the Transit instance
* `answer`:
  * if `message_ack: ok` is in the value (we're in text-mode), then exit with
    success
  * if `file_ack: ok` is in the value (and we're in file/directory mode),
    then wait for Transit to connect, then send the file through Transit,
    then wait for an ack (via Transit), then exit

The sender can handle all of these keys in the same message, or spaced out
over multiple ones. It will ignore any keys it doesn't recognize, and will
completely ignore messages that don't contain any recognized key. The only
constraint is that the message containing `message_ack` or `file_ack` be the
last one: the sender stops looking for wormhole messages at that point.
## Recipient

`wormhole receive` is used for both file/directory-mode and text-mode: it
learns which is being used from the `offer` message.

The recipient enters a loop where it processes the following keys from each
received message:

* `error`: if present in any message, the recipient raises TransferError
  (with the value) and exits immediately (before processing any other keys)
* `transit`: the value is used to build the Transit instance
* `offer`: parse the offer:
  * `message`: accept the message and terminate
  * `file`: connect a Transit instance, wait for it to deliver the indicated
    number of bytes, then write them to the target filename
  * `directory`: as with `file`, but unzip the bytes into the target
    directory
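A minimal sketch of the recipient's dispatch, under the rules above: error handling precedes everything else in each message. The helper names and the `state` dictionary are invented for illustration:

```python
class TransferError(Exception):
    pass

def process_message(msg, state):
    # 'error' wins: raise before looking at any other key in this message.
    if "error" in msg:
        raise TransferError(msg["error"])
    if "transit" in msg:
        state["transit"] = msg["transit"]  # would feed the Transit object
    if "offer" in msg:
        state["offer"] = msg["offer"]      # message, file, or directory
    return state

state = {}
process_message({"transit": {"abilities-v1": [], "hints-v1": []}}, state)
process_message({"offer": {"message": "hi"}}, state)
assert state["offer"] == {"message": "hi"}
```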
## Transit

The Wormhole API does not currently provide for large-volume data transfer
(this feature will be added to a future version, under the name "Dilated
Wormhole"). For now, bulk data is sent through a "Transit" object, which does
not use the Rendezvous Server. Instead, it tries to establish a direct TCP
connection from sender to recipient (or vice versa). If that fails, both
sides connect to a "Transit Relay", a very simple server that just glues two
TCP sockets together when asked.

The Transit object is created with a key (the same key on each side), and all
data sent through it will be encrypted with a derivation of that key. The
transit key is also used to derive handshake messages which are used to make
sure we're talking to the right peer, and to help the Transit Relay match up
the two client connections. Unlike Wormhole objects (which are symmetric),
Transit objects come in pairs: one side is the Sender, and the other is the
Receiver.

Like Wormhole, Transit provides an encrypted record pipe. If you call
`.send()` with 40 bytes, the other end will see a `.gotData()` with exactly
40 bytes: no splitting, merging, dropping, or re-ordering. The Transit object
also functions as a twisted Producer/Consumer, so it can be connected
directly to file-readers and writers, and does flow-control properly.

Most of the complexity of the Transit object has to do with negotiating and
scheduling likely targets for the TCP connection.

Each Transit object has a set of "abilities". These are outbound connection
mechanisms that the client is capable of using. The basic CLI tool (running
on a normal computer) has two abilities: `direct-tcp-v1` and `relay-v1`.

* `direct-tcp-v1` indicates that it can make outbound TCP connections to a
  requested host and port number. "v1" means that the first thing sent over
  these connections is a specific derived handshake message, e.g. `transit
  sender HEXHEX ready\n\n`.
* `relay-v1` indicates it can connect to the Transit Relay and speak the
  matching protocol (in which the first message is `please relay HEXHEX for
  side HEX\n`, and the relay might eventually say `ok\n`).

Future implementations may have additional abilities, such as connecting
directly to Tor onion services, I2P services, WebSockets, WebRTC, or other
connection technologies. Implementations on some platforms (such as web
browsers) may lack `direct-tcp-v1` or `relay-v1`.

While it isn't strictly necessary for both sides to announce what they're
capable of using, it does help performance: an onion-service-capable receiver
shouldn't spend the time and energy to set up an onion service if the sender
can't use it.

After learning the abilities of its peer, the Transit object can create a
list of "hints", which are endpoints that the peer should try to connect to.
Each hint will fall under one of the abilities that the peer indicated it
could use. Hints have types like `direct-tcp-v1`, `tor-tcp-v1`, and
`relay-v1`. Hints are encoded into dictionaries (with a mandatory `type` key,
and other keys as necessary):

* `direct-tcp-v1` {hostname:, port:, priority:?}
* `tor-tcp-v1` {hostname:, port:, priority:?}
* `relay-v1` {hints: [{hostname:, port:, priority:?}, ..]}

For example, if our peer can use `direct-tcp-v1`, then our Transit object
will deduce our local IP addresses (unless forbidden, i.e. we're using Tor),
listen on a TCP port, then send a list of `direct-tcp-v1` hints pointing at
all of them. If our peer can use `relay-v1`, then we'll connect to our relay
server and give the peer a hint to the same.
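A sketch of building the hint dictionaries just described (the helper names, addresses, and hostnames are invented examples; only the dictionary shapes come from the list above):

```python
def direct_hints(addresses, port, priority=0.0):
    # one direct-tcp-v1 hint per local IP address we are listening on
    return [{"type": "direct-tcp-v1", "hostname": addr,
             "port": port, "priority": priority}
            for addr in addresses]

def relay_hint(relay_host, relay_port):
    # relay-v1 wraps its endpoints in a nested 'hints' list
    return {"type": "relay-v1",
            "hints": [{"hostname": relay_host, "port": relay_port,
                       "priority": 0.0}]}

hints = direct_hints(["192.168.1.5", "10.0.0.5"], 9005)
hints.append(relay_hint("relay.example.com", 4001))
assert [h["type"] for h in hints] == ["direct-tcp-v1",
                                      "direct-tcp-v1", "relay-v1"]
```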
`tor-tcp-v1` hints indicate an Onion service, which cannot be reached without
Tor. `direct-tcp-v1` hints can be reached with direct TCP connections (unless
forbidden) or by proxying through Tor. Onion services take about 30 seconds
to spin up, but bypass NAT, allowing two clients behind NAT boxes to connect
without a transit relay (really, the entire Tor network is acting as a
relay).

The file-transfer application uses `transit` messages to convey these
abilities and hints from one Transit object to the other. After updating the
Transit objects, it then asks the Transit object to connect, whereupon
Transit will try to connect to all the hints that it can, and will use the
first one that succeeds.

The file-transfer application, when actually sending file/directory data,
will close the Wormhole as soon as it has enough information to begin opening
the Transit connection. The final ack of the received data is sent through
the Transit object, as a UTF-8-encoded JSON-encoded dictionary with `ack: ok`
and `sha256: HEXHEX` containing the hash of the received data.
## Future Extensions

Transit will be extended to provide other connection techniques:

* WebSocket: usable by web browsers, not too hard to use by normal computers,
  requires direct (or relayed) TCP connection
* WebRTC: usable by web browsers, hard-but-technically-possible to use by
  normal computers, provides NAT hole-punching for "free"
* (web browsers cannot make direct TCP connections, so interop between
  browsers and CLI clients will either require adding WebSocket to CLI, or a
  relay that is capable of speaking/bridging both)
* I2P: like Tor, but not capable of proxying to normal TCP hints.
* ICE-mediated STUN/STUNT: NAT hole-punching, assisted somewhat by a server
  that can tell you your external IP address and port. Maybe implemented as a
  uTP stream (which is UDP-based, and thus easier to get through NAT).

The file-transfer protocol will be extended too:

* "command mode": establish the connection, *then* figure out what we want to
  use it for, allowing multiple files to be exchanged, in either direction.
  This is to support a GUI that lets you open the wormhole, then drop files
  into it on either end.
* some Transit messages being sent early, so ports and Onion services can be
  spun up earlier, to reduce overall waiting time
* transit messages being sent in multiple phases: maybe the transit
  connection can progress while waiting for the user to confirm the transfer

The hope is that by sending everything in dictionaries and multiple messages,
there will be enough wiggle room to make these extensions in a
backwards-compatible way. For example, to add "command mode" while allowing
the fancy new (as yet unwritten) GUI client to interoperate with
old-fashioned one-file-only CLI clients, we need the GUI tool to send an "I'm
capable of command mode" indication in the VERSION message, and look for it
in the received VERSION. If it isn't present, it will either expect to see an
offer (if the other side is sending), or nothing (if it is waiting to
receive), and can explain the situation to the user accordingly. It might
show a locked set of bars over the wormhole graphic to mean "cannot send", or
a "waiting to send them a file" overlay for send-only.
---
`docs/introduction.md` (new file, 56 lines)
# Magic-Wormhole

The magic-wormhole (Python) distribution provides several things: an
executable tool ("bin/wormhole"), an importable library (`import wormhole`),
the URL of a publicly-available Rendezvous Server, and the definition of a
protocol used by all three.

The executable tool provides basic sending and receiving of files,
directories, and short text strings. These all use `wormhole send` and
`wormhole receive` (which can be abbreviated as `wormhole tx` and `wormhole
rx`). It also has a mode to facilitate the transfer of SSH keys. This tool,
while useful on its own, is just one possible use of the protocol.

The `wormhole` library provides an API to establish a bidirectional ordered
encrypted record pipe to another instance (where each record is an
arbitrary-sized bytestring). This does not provide file transfer directly:
the "bin/wormhole" tool speaks a simple protocol through this record pipe to
negotiate and perform the file transfer.

`wormhole/cli/public_relay.py` contains the URLs of a Rendezvous Server and a
Transit Relay which I provide to support the file-transfer tools, and which
other developers should feel free to use for their applications as well. I
cannot make any guarantees about performance or uptime for these servers: if
you want to use Magic Wormhole in a production environment, please consider
running a server on your own infrastructure (just run `wormhole-server start`
and modify the URLs in your application to point at it).

## The Magic-Wormhole Protocol

There are several layers to the protocol.

At the bottom level, each client opens a WebSocket to the Rendezvous Server,
sending JSON-based commands to the server, and receiving similarly-encoded
messages. Some of these commands are addressed to the server itself, while
others are instructions to queue a message to other clients, or are
indications of messages coming from other clients. All these messages are
described in "server-protocol.md".

These inter-client messages are used to convey the PAKE protocol exchange,
then a "VERSION" message (which doubles as verification of the session key),
then some number of encrypted application-level data messages.
"client-protocol.md" describes these wormhole-to-wormhole messages.

Each wormhole-using application is then free to interpret the data messages
as it pleases. The file-transfer app sends an "offer" from the `wormhole
send` side, to which the `wormhole receive` side sends a response, after
which the Transit connection is negotiated (if necessary), and finally the
data is sent through the Transit connection. "file-transfer-protocol.md"
describes this application's use of the client messages.

## The `wormhole` API

Applications use the `wormhole` library to establish wormhole connections
and exchange data through them. Please see `api.md` for a complete
description of this interface.
148
docs/journal.md
Normal file
@ -0,0 +1,148 @@
# Journaled Mode

(note: this section is speculative, the code has not yet been written)

Magic-Wormhole supports applications which are written in a "journaled" or
"checkpointed" style. These apps store their entire state in a well-defined
checkpoint (perhaps in a database), and react to inbound events or messages
by carefully moving from one state to another, then releasing any outbound
messages. As a result, they can be terminated safely at any moment, without
warning, and ensure that the externally-visible behavior is deterministic and
independent of this stop/restart timing.

This is the style encouraged by the E event loop, the
original [Waterken Server](http://waterken.sourceforge.net/), and the more
modern [Ken Platform](http://web.eecs.umich.edu/~tpkelly/Ken/), all
influential in the object-capability security community.

## Requirements

Applications written in this style must follow some strict rules:

* all state goes into the checkpoint
* the only way to affect the state is by processing an input message
* event processing is deterministic (any non-determinism must be implemented
  as a message, e.g. from a clock service or a random-number generator)
* apps must never forget a message for which they've accepted responsibility

The main processing function takes the previous state checkpoint and a single
input message, and produces a new state checkpoint and a set of output
messages. For performance, the state might be kept in memory between events,
but the behavior should be indistinguishable from that of a server which
terminates completely between events.

In general, applications must tolerate duplicate inbound messages, and should
re-send outbound messages until the recipient acknowledges them. Any outbound
responses to an inbound message must be queued until the checkpoint is
recorded. If outbound messages were delivered before the checkpointing, then
a crash just after delivery would roll the process back to a state where it
forgot about the inbound event, causing observably inconsistent behavior that
depends upon whether the outbound message successfully escaped the dying
process or not.

As a result, journaled-style applications use a very specific process when
interacting with the outside world. Their event-processing function looks
like:

* receive inbound event
* (load state)
* create queue for any outbound messages
* process message (changing state and queuing outbound messages)
* serialize state, record in checkpoint
* deliver any queued outbound messages
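That loop can be sketched as a pure function from (previous checkpoint, inbound message) to (new checkpoint, outbound queue). This is an illustrative toy, not magic-wormhole's API; the `seen` set and `counter` field are invented for the example:

```python
import json

def process_event(checkpoint, inbound_message):
    """One journaled step: load state, apply the message deterministically,
    record the new checkpoint, and only then release the outbound queue."""
    state = json.loads(checkpoint)                  # (load state)
    outbound = []                                   # queue for outbound messages

    # always ack, but only process messages we have not seen before
    outbound.append({"type": "ack", "id": inbound_message["id"]})
    if inbound_message["id"] not in state["seen"]:
        state["seen"].append(inbound_message["id"])
        state["counter"] += 1                       # the actual state change

    new_checkpoint = json.dumps(state)              # serialize state, record checkpoint
    return new_checkpoint, outbound                 # deliver queued messages last
```

Because duplicates are detected from the checkpointed `seen` set, re-delivering the same message after a crash changes nothing except re-sending the ack, which is exactly the tolerance described above.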
In addition, the protocols used to exchange messages should include message
IDs and acks. Part of the state vector will include a set of unacknowledged
outbound messages. When a connection is established, all outbound messages
should be re-sent, and messages are removed from the pending set when an
inbound ack is received. The state must include a set of inbound message ids
which have been processed already. All inbound messages receive an ack, but
only new ones are processed. Connection establishment/loss is not strictly
included in the journaled-app model (in Waterken/Ken, message delivery is
provided by the platform, and apps do not know about connections), but in
general:

* "I want to have a connection" is stored in the state vector
* "I am connected" is not
* when a connection is established, code can run to deliver pending messages,
  and this does not qualify as an inbound event
* inbound events can only happen when at least one connection is established
* immediately after restarting from a checkpoint, no connections are
  established, but the app might initiate outbound connections, or prepare to
  accept inbound ones

## Wormhole Support

To support this mode, the Wormhole constructor accepts a `journal=` argument.
If provided, it must be an object that implements the `wormhole.IJournal`
interface, which consists of two methods:

* `j.queue_outbound(fn, *args, **kwargs)`: used to delay delivery of outbound
  messages until the checkpoint has been recorded
* `j.process()`: a context manager which should be entered before processing
  inbound messages

`wormhole.Journal` is an implementation of this interface, which is
constructed with a (synchronous) `save_checkpoint` function. Applications can
use it, or bring their own.

The Wormhole object, when configured with a journal, will wrap all inbound
WebSocket message processing with the `j.process()` context manager, and will
deliver all outbound messages through `j.queue_outbound`. Applications using
such a Wormhole must also use the same journal for their own (non-wormhole)
events. It is important to coordinate multiple sources of events: e.g. a UI
event may cause the application to call `w.send(data)`, and the outbound
wormhole message should be checkpointed along with the app's state changes
caused by the UI event. Using a shared journal for both wormhole- and
non-wormhole- events provides this coordination.

The `save_checkpoint` function should serialize application state along with
any Wormholes that are active. Wormhole state can be obtained by calling
`w.serialize()`, which will return a dictionary (that can be
JSON-serialized). At application startup (or checkpoint resumption),
Wormholes can be regenerated with `wormhole.from_serialized()`. Note that
only "delegated-mode" wormholes can be serialized: Deferreds are not amenable
to usage beyond a single process lifetime.

For a functioning example of a journaled-mode application, see
misc/demo-journal.py. The following snippet may help illustrate the concepts:
```python
class App:
    @classmethod
    def new(klass):
        self = klass()
        self.state = {}
        self.j = wormhole.Journal(self.save_checkpoint)
        self.w = wormhole.create(.., delegate=self, journal=self.j)
        return self

    @classmethod
    def from_serialized(klass):
        self = klass()
        self.j = wormhole.Journal(self.save_checkpoint)
        with open("state.json", "r") as f:
            data = json.load(f)
        self.state = data["state"]
        self.w = wormhole.from_serialized(data["wormhole"], reactor,
                                          delegate=self, journal=self.j)
        return self

    def inbound_event(self, event):
        # non-wormhole events must be performed in the journal context
        with self.j.process():
            parse_event(event)
            change_state()
            self.j.queue_outbound(self.send, outbound_message)

    def wormhole_received(self, data):
        # wormhole events are already performed in the journal context
        change_state()
        self.j.queue_outbound(self.send, stuff)

    def send(self, outbound_message):
        actually_send_message(outbound_message)

    def save_checkpoint(self):
        app_state = {"state": self.state, "wormhole": self.w.serialize()}
        with open("state.json", "w") as f:
            json.dump(app_state, f)
```
237
docs/server-protocol.md
Normal file
@ -0,0 +1,237 @@
# Rendezvous Server Protocol

## Concepts

The Rendezvous Server provides queued delivery of binary messages from one
client to a second, and vice versa. Each message contains a "phase" (a
string) and a body (a bytestring). These messages are queued in a "Mailbox"
until the other side connects and retrieves them, but are delivered
immediately if both sides are connected to the server at the same time.

Mailboxes are identified by a large random string. "Nameplates", in contrast,
have short numeric identities: in a wormhole code like "4-purple-sausages",
the "4" is the nameplate.

Each client has a randomly-generated "side", a short hex string, used to
differentiate between echoes of a client's own messages and real messages
from the other client.

## Application IDs

The server isolates each application from the others. Each client provides an
"App ID" when it first connects (via the "BIND" message), and all subsequent
commands are scoped to this application. This means that nameplates
(described below) and mailboxes can be re-used between different apps. The
AppID is a unicode string. Both sides of the wormhole must use the same
AppID, of course, or they'll never see each other. The server keeps track of
which applications are in use for maintenance purposes.

Each application should use a unique AppID. Developers are encouraged to use
"DNSNAME/APPNAME" to obtain a unique one: e.g. the `bin/wormhole`
file-transfer tool uses `lothar.com/wormhole/text-or-file-xfer`.
## WebSocket Transport

At the lowest level, each client establishes (and maintains) a WebSocket
connection to the Rendezvous Server. If the connection is lost (which could
happen because the server was rebooted for maintenance, or because the
client's network connection migrated from one network to another, or because
the resident network gremlins decided to mess with you today), clients should
reconnect after waiting a random (and exponentially-growing) delay. The
Python implementation waits about 1 second after the first connection loss,
growing by 50% each time, capped at 1 minute.
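A sketch of that schedule (names and defaults are illustrative; a real client also randomizes each delay, which is omitted here):

```python
def reconnect_delays(initial=1.0, factor=1.5, ceiling=60.0):
    """Yield the delay (in seconds) to wait before each reconnection
    attempt: start near `initial`, grow by `factor`, cap at `ceiling`."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * factor, ceiling)
```

The first few attempts would wait roughly 1.0s, 1.5s, 2.25s, and so on, reaching the 60-second ceiling after about a dozen attempts.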
Each message to the server is a dictionary, with at least a `type` key, and
other keys that depend upon the particular message type. Messages from server
to client follow the same format.

`misc/dump-timing.py` is a debug tool which renders timing data gathered from
the server and both clients, to identify protocol slowdowns and guide
optimization efforts. To support this, the client/server messages include
additional keys. Client->Server messages include a random `id` key, which is
copied into the `ack` that is immediately sent back to the client for all
commands (logged for the timing tool but otherwise ignored). Some
client->server messages (`list`, `allocate`, `claim`, `release`, `close`,
`ping`) provoke a direct response from the server: for these, `id` is copied
into the response. This helps the tool correlate the command and response.
All server->client messages have a `server_tx` timestamp (seconds since
epoch, as a float), which records when the message left the server. Direct
responses include a `server_rx` timestamp, to record when the client's
command was received. The tool combines these with local timestamps (recorded
by the client and not shared with the server) to build a full picture of
network delays and round-trip times.

All messages are serialized as JSON, encoded to UTF-8, and the resulting
bytes sent as a single "binary-mode" WebSocket payload.

The server can signal `error` for any message type it does not recognize.
Clients and Servers must ignore unrecognized keys in otherwise-recognized
messages. Clients must ignore unrecognized message types from the Server.
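A minimal sketch of this framing (the `encode_command` helper is hypothetical; the `type` and `id` keys are the ones described above):

```python
import json
import os

def encode_command(msg_type, **fields):
    """Build one client->server command: a dict with at least a `type` key,
    plus a random `id` for the timing tool, serialized as UTF-8 JSON and
    sent as a single binary-mode WebSocket payload."""
    msg = {"type": msg_type, "id": os.urandom(4).hex()}
    msg.update(fields)
    return json.dumps(msg).encode("utf-8")
```

For example, `encode_command("bind", appid="lothar.com/wormhole/text-or-file-xfer", side="abcd1234")` would produce the first payload a client sends.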
## Connection-Specific (Client-to-Server) Messages

The first thing each client sends to the server, immediately after the
WebSocket connection is established, is a `bind` message. This specifies the
AppID and side (in keys `appid` and `side`, respectively) that all subsequent
messages will be scoped to. While technically each message could be
independent (with its own `appid` and `side`), I thought it would be less
confusing to use exactly one WebSocket per logical wormhole connection.

The first thing the server sends to each client is the `welcome` message.
This is intended to deliver important status information to the client that
might influence its operation. The Python client currently reacts to the
following keys (and ignores all others):

* `current_cli_version`: prompts the user to upgrade if the server's
  advertised version is greater than the client's version (as derived from
  the git tag)
* `motd`: prints this message, if present; intended to inform users about
  performance problems, scheduled downtime, or to beg for donations to keep
  the server running
* `error`: causes the client to print the message and then terminate. If a
  future version of the protocol requires a rate-limiting CAPTCHA ticket or
  other authorization record, the server can send `error` (explaining the
  requirement) if it does not see this ticket arrive before the `bind`.

A `ping` will provoke a `pong`: these are only used by unit tests for
synchronization purposes (to detect when a batch of messages has been fully
processed by the server). NAT-binding refresh messages are handled by the
WebSocket layer (by asking Autobahn to send a keepalive message every 60
seconds), and do not use `ping`.

If any client->server command is invalid (e.g. it lacks a necessary key, or
was sent in the wrong order), an `error` response will be sent. This response
will include the error string in the `error` key, and a full copy of the
original message dictionary in `orig`.
## Nameplates

Wormhole codes look like `4-purple-sausages`, consisting of a number followed
by some random words. This number is called a "Nameplate".

On the Rendezvous Server, the Nameplate contains a pointer to a Mailbox.
Clients can "claim" a nameplate, and then later "release" it. Each claim is
for a specific side (so one client claiming the same nameplate multiple times
only counts as one claim). Nameplates are deleted once the last client has
released them, or after some period of inactivity.

Clients can either make up nameplates themselves, or (more commonly) ask the
server to allocate one for them. Allocating a nameplate automatically claims
it (to avoid a race condition), but for simplicity, clients send a claim for
all nameplates, even ones which they've allocated themselves.

Nameplates (on the server) must live until the second client has learned
about the associated mailbox, after which point they can be reused by other
clients. So if two clients connect quickly, but then maintain a long-lived
wormhole connection, they do not need to consume the limited space of short
nameplates for that whole time.

The `allocate` command allocates a nameplate (the server returns one that is
as short as possible), and the `allocated` response provides the answer.
Clients can also send a `list` command to get back a `nameplates` response
with all allocated nameplates for the bound AppID: this helps the code-input
tab-completion feature know which prefixes to offer. The `nameplates`
response returns a list of dictionaries, one per claimed nameplate, with at
least an `id` key in each one (with the nameplate string). Future versions
may record additional attributes in the nameplate records, specifically a
wordlist identifier and a code length (again to help with code-completion on
the receiver).
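For instance, a readline-style completer might reduce the `nameplates` response to candidate strings like this (the helper name is hypothetical; the response shape is as described above):

```python
def completion_candidates(typed_prefix, nameplates_response):
    """Return the nameplate ids from a `nameplates` response that start
    with whatever the user has typed so far, for tab-completion."""
    ids = [rec["id"] for rec in nameplates_response["nameplates"]]
    return sorted(n for n in ids if n.startswith(typed_prefix))
```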
## Mailboxes

The server provides a single "Mailbox" to each pair of connecting Wormhole
clients. This holds an unordered set of messages, delivered immediately to
connected clients, and queued for delivery to clients which connect later.
Messages from both clients are merged together: clients use the included
`side` identifier to distinguish echoes of their own messages from those
coming from the other client.

Each mailbox is "opened" by some number of clients at a time, until all
clients have closed it. Mailboxes are kept alive by either an open client, or
a Nameplate which points to the mailbox (so when a Nameplate is deleted from
inactivity, the corresponding Mailbox will be too).

The `open` command both marks the mailbox as being opened by the bound side,
and also adds the WebSocket as subscribed to that mailbox, so new messages
are delivered immediately to the connected client. There is no explicit ack
to the `open` command, but since all clients add a message to the mailbox as
soon as they connect, there will always be a `message` response shortly after
the `open` goes through. The `close` command provokes a `closed` response.

The `close` command accepts an optional "mood" string: this allows clients to
tell the server (in general terms) about their experiences with the wormhole
interaction. The server records the mood in its "usage" record, so the server
operator can get a sense of how many connections are succeeding and failing.
The moods currently recognized by the Rendezvous Server are:

* `happy` (default): the PAKE key-establishment worked, and the client saw at
  least one valid encrypted message from its peer
* `lonely`: the client gave up without hearing anything from its peer
* `scary`: the client saw an invalid encrypted message from its peer,
  indicating that either the wormhole code was typed in wrong, or an attacker
  tried (and failed) to guess the code
* `errory`: the client encountered some other error: protocol problem or
  internal error

The server will also record `pruney` if it deleted the mailbox due to
inactivity, or `crowded` if more than two sides tried to access the mailbox.

When clients use the `add` command to add a client-to-client message, they
will put the body (a bytestring) into the command as a hex-encoded string in
the `body` key. They will also put the message's "phase", as a string, into
the `phase` key. See client-protocol.md for details about how different
phases are used.

When a client sends `open`, it will get back a `message` response for every
message in the mailbox. It will also get a real-time `message` for every
`add` performed by clients later. These `message` responses include "side"
and "phase" from the sending client, and "body" (as a hex string, encoding
the binary message body). The decoded "body" will either be a random-looking
cryptographic value (for the PAKE message), or a random-looking encrypted
blob (for the VERSION message, as well as all application-provided payloads).
The `message` response will also include `id`, copied from the `id` of the
`add` message (and used only by the timing-diagram tool).

The Rendezvous Server does not de-duplicate messages, nor does it retain
ordering: clients must do both if they need to.
## All Message Types

This lists all message types, along with the type-specific keys for each (if
any), and which ones provoke direct responses:

* S->C welcome {welcome:}
* (C->S) bind {appid:, side:}
* (C->S) list {} -> nameplates
* S->C nameplates {nameplates: [{id: str},..]}
* (C->S) allocate {} -> allocated
* S->C allocated {nameplate:}
* (C->S) claim {nameplate:} -> claimed
* S->C claimed {mailbox:}
* (C->S) release {nameplate:?} -> released
* S->C released
* (C->S) open {mailbox:}
* (C->S) add {phase: str, body: hex} -> message (to all connected clients)
* S->C message {side:, phase:, body:, id:}
* (C->S) close {mailbox:?, mood:?} -> closed
* S->C closed
* S->C ack
* (C->S) ping {ping: int} -> pong
* S->C pong {pong: int}
* S->C error {error: str, orig:}
## Persistence

The server stores all messages in a database, so it should not lose any
information when it is restarted. The server will not send a direct
response until any side-effects (such as the message being added to the
mailbox) have been safely committed to the database.

The client library knows how to resume the protocol after a reconnection
event, assuming the client process itself continues to run.

Clients which terminate entirely between messages (e.g. a secure chat
application, which requires multiple wormhole messages to exchange
address-book entries, and which must function even if the two apps are never
both running at the same time) can use "Journaled Mode" to ensure forward
progress is made: see "journal.md" for details.
9
docs/state-machines/Makefile
Normal file
@ -0,0 +1,9 @@
default: images

images: boss.png code.png key.png machines.png mailbox.png nameplate.png lister.png order.png receive.png send.png terminator.png

.PHONY: default images

%.png: %.dot
	dot -T png $< >$@
76
docs/state-machines/_connection.dot
Normal file
@ -0,0 +1,76 @@
digraph {
  /* note: this is nominally what we want from the machine that
     establishes the WebSocket connection (and re-establishes it when it
     is lost). We aren't using this yet; for now we're relying upon
     twisted.application.internet.ClientService, which does reconnection
     and random exponential backoff.

     The one thing it doesn't do is fail entirely when the first
     connection attempt fails, which I think would be good for usability.
     If the first attempt fails, it's probably because you don't have a
     network connection, or the hostname is wrong, or the service has
     been retired entirely. And retrying silently forever is not being
     honest with the user.

     So I'm keeping this diagram around, as a reminder of how we'd like
     to modify ClientService. */

  /* ConnectionMachine */
  C_start [label="Connection\nMachine" style="dotted"]
  C_start -> C_Pc1 [label="CM_start()" color="orange" fontcolor="orange"]
  C_Pc1 [shape="box" label="ep.connect()" color="orange"]
  C_Pc1 -> C_Sc1 [color="orange"]
  C_Sc1 [label="connecting\n(1st time)" color="orange"]
  C_Sc1 -> C_P_reset [label="d.callback" color="orange" fontcolor="orange"]
  C_P_reset [shape="box" label="reset\ntimer" color="orange"]
  C_P_reset -> C_S_negotiating [color="orange"]
  C_Sc1 -> C_P_failed [label="d.errback" color="red"]
  C_Sc1 -> C_P_failed [label="p.onClose" color="red"]
  C_Sc1 -> C_P_cancel [label="C_stop()"]
  C_P_cancel [shape="box" label="d.cancel()"]
  C_P_cancel -> C_S_cancelling
  C_S_cancelling [label="cancelling"]
  C_S_cancelling -> C_P_stopped [label="d.errback"]

  C_S_negotiating [label="negotiating" color="orange"]
  C_S_negotiating -> C_P_failed [label="p.onClose"]
  C_S_negotiating -> C_P_connected [label="p.onOpen" color="orange" fontcolor="orange"]
  C_S_negotiating -> C_P_drop2 [label="C_stop()"]
  C_P_drop2 [shape="box" label="p.dropConnection()"]
  C_P_drop2 -> C_S_disconnecting
  C_P_connected [shape="box" label="tx bind\nM_connected()" color="orange"]
  C_P_connected -> C_S_open [color="orange"]

  C_S_open [label="open" color="green"]
  C_S_open -> C_P_lost [label="p.onClose" color="blue" fontcolor="blue"]
  C_S_open -> C_P_drop [label="C_stop()" color="orange" fontcolor="orange"]
  C_P_drop [shape="box" label="p.dropConnection()\nM_lost()" color="orange"]
  C_P_drop -> C_S_disconnecting [color="orange"]
  C_S_disconnecting [label="disconnecting" color="orange"]
  C_S_disconnecting -> C_P_stopped [label="p.onClose" color="orange" fontcolor="orange"]

  C_P_lost [shape="box" label="M_lost()" color="blue"]
  C_P_lost -> C_P_wait [color="blue"]
  C_P_wait [shape="box" label="start timer" color="blue"]
  C_P_wait -> C_S_waiting [color="blue"]
  C_S_waiting [label="waiting" color="blue"]
  C_S_waiting -> C_Pc2 [label="expire" color="blue" fontcolor="blue"]
  C_S_waiting -> C_P_stop_timer [label="C_stop()"]
  C_P_stop_timer [shape="box" label="timer.cancel()"]
  C_P_stop_timer -> C_P_stopped
  C_Pc2 [shape="box" label="ep.connect()" color="blue"]
  C_Pc2 -> C_Sc2 [color="blue"]
  C_Sc2 [label="reconnecting" color="blue"]
  C_Sc2 -> C_P_reset [label="d.callback" color="blue" fontcolor="blue"]
  C_Sc2 -> C_P_wait [label="d.errback"]
  C_Sc2 -> C_P_cancel [label="C_stop()"]

  C_P_stopped [shape="box" label="MC_stopped()" color="orange"]
  C_P_stopped -> C_S_stopped [color="orange"]
  C_S_stopped [label="stopped" color="orange"]

  C_P_failed [shape="box" label="notify_fail" color="red"]
  C_P_failed -> C_S_failed
  C_S_failed [label="failed" color="red"]
}
29
docs/state-machines/allocator.dot
Normal file
@ -0,0 +1,29 @@
digraph {

  start [label="A:\nNameplate\nAllocation" style="dotted"]
  {rank=same; start S0A S0B}
  start -> S0A [style="invis"]
  S0A [label="S0A:\nidle\ndisconnected" color="orange"]
  S0A -> S0B [label="connected"]
  S0B -> S0A [label="lost"]
  S0B [label="S0B:\nidle\nconnected"]
  S0A -> S1A [label="allocate(length, wordlist)" color="orange"]
  S0B -> P_allocate [label="allocate(length, wordlist)"]
  P_allocate [shape="box" label="RC.tx_allocate" color="orange"]
  P_allocate -> S1B [color="orange"]
  {rank=same; S1A P_allocate S1B}
  S0B -> S1B [style="invis"]
  S1B [label="S1B:\nallocating\nconnected" color="orange"]
  S1B -> foo [label="lost"]
  foo [style="dotted" label=""]
  foo -> S1A
  S1A [label="S1A:\nallocating\ndisconnected" color="orange"]
  S1A -> P_allocate [label="connected" color="orange"]

  S1B -> P_allocated [label="rx_allocated" color="orange"]
  P_allocated [shape="box" label="choose words\nC.allocated(nameplate,code)" color="orange"]
  P_allocated -> S2 [color="orange"]

  S2 [label="S2:\ndone" color="orange"]

}
80
docs/state-machines/boss.dot
Normal file
@ -0,0 +1,80 @@
digraph {

  /* could shave a RTT by committing to the nameplate early, before
     finishing the rest of the code input. While the user is still
     typing/completing the code, we claim the nameplate, open the mailbox,
     and retrieve the peer's PAKE message. Then as soon as the user
     finishes entering the code, we build our own PAKE message, send PAKE,
     compute the key, send VERSION. Starting from the Return, this saves
     two round trips. OTOH it adds consequences to hitting Tab. */

  start [label="Boss\n(manager)" style="dotted"]

  {rank=same; P0_code S0}
  P0_code [shape="box" style="dashed"
           label="C.input_code\n or C.allocate_code\n or C.set_code"]
  P0_code -> S0
  S0 [label="S0: empty"]
  S0 -> P0_build [label="got_code"]

  S0 -> P_close_error [label="rx_error"]
  P_close_error [shape="box" label="T.close(errory)"]
  P_close_error -> S_closing
  S0 -> P_close_lonely [label="close"]

  S0 -> P_close_unwelcome [label="rx_unwelcome"]
  P_close_unwelcome [shape="box" label="T.close(unwelcome)"]
  P_close_unwelcome -> S_closing

  P0_build [shape="box" label="W.got_code"]
  P0_build -> S1
  S1 [label="S1: lonely" color="orange"]

  S1 -> S2 [label="happy"]

  S1 -> P_close_error [label="rx_error"]
  S1 -> P_close_scary [label="scared" color="red"]
  S1 -> P_close_unwelcome [label="rx_unwelcome"]
  S1 -> P_close_lonely [label="close"]
  P_close_lonely [shape="box" label="T.close(lonely)"]
  P_close_lonely -> S_closing

  P_close_scary [shape="box" label="T.close(scary)" color="red"]
  P_close_scary -> S_closing [color="red"]

  S2 [label="S2: happy" color="green"]
  S2 -> P2_close [label="close"]
  P2_close [shape="box" label="T.close(happy)"]
  P2_close -> S_closing

  S2 -> P2_got_phase [label="got_phase"]
  P2_got_phase [shape="box" label="W.received"]
  P2_got_phase -> S2

  S2 -> P2_got_version [label="got_version"]
  P2_got_version [shape="box" label="W.got_version"]
  P2_got_version -> S2

  S2 -> P_close_error [label="rx_error"]
  S2 -> P_close_scary [label="scared" color="red"]
  S2 -> P_close_unwelcome [label="rx_unwelcome"]

  S_closing [label="closing"]
  S_closing -> P_closed [label="closed\nerror"]
  S_closing -> S_closing [label="got_version\ngot_phase\nhappy\nscared\nclose"]

  P_closed [shape="box" label="W.closed(reason)"]
  P_closed -> S_closed
  S_closed [label="closed"]

  S0 -> P_closed [label="error"]
  S1 -> P_closed [label="error"]
  S2 -> P_closed [label="error"]

  {rank=same; Other S_closed}
  Other [shape="box" style="dashed"
         label="rx_welcome -> process (maybe rx_unwelcome)\nsend -> S.send\ngot_message -> got_version or got_phase\ngot_key -> W.got_key\ngot_verifier -> W.got_verifier\nallocate_code -> C.allocate_code\ninput_code -> C.input_code\nset_code -> C.set_code"
        ]

}

docs/state-machines/code.dot (new file, 34 lines)
@@ -0,0 +1,34 @@
digraph {

 start [label="C:\nCode\n(management)" style="dotted"]
 {rank=same; start S0}
 start -> S0 [style="invis"]
 S0 [label="S0:\nidle"]
 S0 -> P0_got_code [label="set_code\n(code)"]
 P0_got_code [shape="box" label="N.set_nameplate"]
 P0_got_code -> P_done
 P_done [shape="box" label="K.got_code\nB.got_code"]
 P_done -> S4
 S4 [label="S4: known" color="green"]

 {rank=same; S1_inputting_nameplate S3_allocating}
 {rank=same; P0_got_code P1_set_nameplate P3_got_nameplate}
 S0 -> P_input [label="input_code"]
 P_input [shape="box" label="I.start\n(helper)"]
 P_input -> S1_inputting_nameplate
 S1_inputting_nameplate [label="S1:\ninputting\nnameplate"]
 S1_inputting_nameplate -> P1_set_nameplate [label="got_nameplate\n(nameplate)"]
 P1_set_nameplate [shape="box" label="N.set_nameplate"]
 P1_set_nameplate -> S2_inputting_words
 S2_inputting_words [label="S2:\ninputting\nwords"]
 S2_inputting_words -> P_done [label="finished_input\n(code)"]

 S0 -> P_allocate [label="allocate_code\n(length,\nwordlist)"]
 P_allocate [shape="box" label="A.allocate\n(length, wordlist)"]
 P_allocate -> S3_allocating
 S3_allocating [label="S3:\nallocating"]
 S3_allocating -> P3_got_nameplate [label="allocated\n(nameplate,\ncode)"]
 P3_got_nameplate [shape="box" label="N.set_nameplate"]
 P3_got_nameplate -> P_done

}

docs/state-machines/input.dot (new file, 43 lines)
@@ -0,0 +1,43 @@
digraph {

 start [label="I:\nCode\nInput" style="dotted"]
 {rank=same; start S0}
 start -> S0 [style="invis"]
 S0 [label="S0:\nidle"]

 S0 -> P0_list_nameplates [label="start"]
 P0_list_nameplates [shape="box" label="L.refresh"]
 P0_list_nameplates -> S1
 S1 [label="S1: typing\nnameplate" color="orange"]

 {rank=same; foo P0_list_nameplates}
 S1 -> foo [label="refresh_nameplates" color="orange" fontcolor="orange"]
 foo [style="dashed" label=""]
 foo -> P0_list_nameplates

 S1 -> P1_record [label="got_nameplates"]
 P1_record [shape="box" label="record\nnameplates"]
 P1_record -> S1

 S1 -> P1_claim [label="choose_nameplate" color="orange" fontcolor="orange"]
 P1_claim [shape="box" label="stash nameplate\nC.got_nameplate"]
 P1_claim -> S2
 S2 [label="S2: typing\ncode\n(no wordlist)"]
 S2 -> S2 [label="got_nameplates"]
 S2 -> P2_stash_wordlist [label="got_wordlist"]
 P2_stash_wordlist [shape="box" label="stash wordlist"]
 P2_stash_wordlist -> S3
 S2 -> P_done [label="choose_words" color="orange" fontcolor="orange"]
 S3 [label="S3: typing\ncode\n(yes wordlist)"]
 S3 -> S3 [label="got_nameplates"]
 S3 -> P_done [label="choose_words" color="orange" fontcolor="orange"]
 P_done [shape="box" label="build code\nC.finished_input(code)"]
 P_done -> S4
 S4 [label="S4: done" color="green"]
 S4 -> S4 [label="got_nameplates\ngot_wordlist"]

 other [shape="box" style="dotted" color="orange" fontcolor="orange"
        label="h.refresh_nameplates()\nh.get_nameplate_completions(prefix)\nh.choose_nameplate(nameplate)\nh.get_word_completions(prefix)\nh.choose_words(words)"
        ]
 {rank=same; S4 other}
}
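The `other` box above lists the input-helper methods a frontend calls while the user types. A minimal stdlib-only sketch of the prefix-completion step those `get_*_completions(prefix)` calls perform (the function name and shape here are illustrative, not the real `InputHelperAPI`):

```python
def get_completions(candidates, prefix):
    """Return the candidates that extend the typed prefix, sorted for a
    stable tab-completion display."""
    return sorted(c for c in candidates if c.startswith(prefix))
```

A GUI wordlist-dropdown or a readline completer can call this against either the nameplate list (from the Lister) or the wordlist (learned after claiming the nameplate).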

docs/state-machines/key.dot (new file, 63 lines)
@@ -0,0 +1,63 @@
digraph {

 /* could shave a RTT by committing to the nameplate early, before
 finishing the rest of the code input. While the user is still
 typing/completing the code, we claim the nameplate, open the mailbox,
 and retrieve the peer's PAKE message. Then as soon as the user
 finishes entering the code, we build our own PAKE message, send PAKE,
 compute the key, send VERSION. Starting from the Return, this saves
 two round trips. OTOH it adds consequences to hitting Tab. */

 start [label="Key\nMachine" style="dotted"]

 /* two connected state machines: the first just puts the messages in
 the right order, the second handles PAKE */

 {rank=same; SO_00 PO_got_code SO_10}
 {rank=same; SO_01 PO_got_both SO_11}
 SO_00 [label="S00"]
 SO_01 [label="S01: pake"]
 SO_10 [label="S10: code"]
 SO_11 [label="S11: both"]
 SO_00 -> SO_01 [label="got_pake\n(early)"]
 SO_00 -> PO_got_code [label="got_code"]
 PO_got_code [shape="box" label="K1.got_code"]
 PO_got_code -> SO_10
 SO_01 -> PO_got_both [label="got_code"]
 PO_got_both [shape="box" label="K1.got_code\nK1.got_pake"]
 PO_got_both -> SO_11
 SO_10 -> PO_got_pake [label="got_pake"]
 PO_got_pake [shape="box" label="K1.got_pake"]
 PO_got_pake -> SO_11

 S0 [label="S0: know\nnothing"]
 S0 -> P0_build [label="got_code"]

 P0_build [shape="box" label="build_pake\nM.add_message(pake)"]
 P0_build -> S1
 S1 [label="S1: know\ncode"]

 /* the Mailbox will deliver each message exactly once, but doesn't
 guarantee ordering: if Alice starts the process, then disconnects,
 then Bob starts (reading PAKE, sending both his PAKE and his VERSION
 phase), then Alice will see both PAKE and VERSION on her next
 connect, and might get the VERSION first.

 The Wormhole will queue inbound messages that it isn't ready for. The
 wormhole shim that lets applications do w.get(phase=) must do
 something similar, queueing inbound messages until it sees one for
 the phase it currently cares about. */

 S1 -> P_mood_scary [label="got_pake\npake bad"]
 P_mood_scary [shape="box" color="red" label="W.scared"]
 P_mood_scary -> S5 [color="red"]
 S5 [label="S5:\nscared" color="red"]
 S1 -> P1_compute [label="got_pake\npake good"]
 #S1 -> P_mood_lonely [label="close"]

 P1_compute [label="compute_key\nM.add_message(version)\nB.got_key\nR.got_key" shape="box"]
 P1_compute -> S4

 S4 [label="S4: know_key" color="green"]

}
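The comment above says the `w.get(phase=)` shim must queue inbound messages until one arrives for the phase the caller currently wants. A toy stdlib-only sketch of that behavior (class and method names are illustrative, not the real wormhole API):

```python
class PhaseQueue:
    """Hold out-of-order inbound phases until the application asks for them."""

    def __init__(self):
        self._pending = {}  # phase -> payload, held until requested

    def deliver(self, phase, payload):
        # inbound messages may arrive in any order (e.g. VERSION before PAKE)
        self._pending[phase] = payload

    def get(self, phase):
        # the real shim would return a Deferred and fire it on arrival;
        # here we just raise if the wanted phase has not been seen yet
        if phase not in self._pending:
            raise KeyError("phase %r not yet delivered" % phase)
        return self._pending.pop(phase)
```

With this, delivering `version` before `pake` does not confuse a caller who asks for `pake` first: the early message simply waits in `_pending`.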

docs/state-machines/lister.dot (new file, 39 lines)
@@ -0,0 +1,39 @@
digraph {
 {rank=same; title S0A S0B}
 title [label="(Nameplate)\nLister" style="dotted"]

 S0A [label="S0A:\nnot wanting\nunconnected"]
 S0B [label="S0B:\nnot wanting\nconnected" color="orange"]

 S0A -> S0B [label="connected"]
 S0B -> S0A [label="lost"]

 S0A -> S1A [label="refresh"]
 S0B -> P_tx [label="refresh" color="orange" fontcolor="orange"]

 S0A -> P_tx [style="invis"]

 {rank=same; S1A P_tx S1B P_notify}

 S1A [label="S1A:\nwant list\nunconnected"]
 S1B [label="S1B:\nwant list\nconnected" color="orange"]

 S1A -> P_tx [label="connected"]
 P_tx [shape="box" label="RC.tx_list()" color="orange"]
 P_tx -> S1B [color="orange"]
 S1B -> S1A [label="lost"]

 S1A -> foo [label="refresh"]
 foo [label="" style="dashed"]
 foo -> S1A

 S1B -> foo2 [label="refresh"]
 foo2 [label="" style="dashed"]
 foo2 -> P_tx

 S0B -> P_notify [label="rx_nameplates"]
 S1B -> P_notify [label="rx_nameplates" color="orange" fontcolor="orange"]
 P_notify [shape="box" label="I.got_nameplates()"]
 P_notify -> S0B

}

docs/state-machines/machines.dot (new file, 115 lines)
@@ -0,0 +1,115 @@
digraph {
 Wormhole [shape="oval" color="blue" fontcolor="blue"]
 Boss [shape="box" label="Boss\n(manager)"
       color="blue" fontcolor="blue"]
 Nameplate [label="Nameplate\n(claimer)"
            shape="box" color="blue" fontcolor="blue"]
 Mailbox [label="Mailbox\n(opener)"
          shape="box" color="blue" fontcolor="blue"]
 Connection [label="Rendezvous\nConnector"
             shape="oval" color="blue" fontcolor="blue"]
 #websocket [color="blue" fontcolor="blue"]
 Order [shape="box" label="Ordering" color="blue" fontcolor="blue"]
 Key [shape="box" label="Key" color="blue" fontcolor="blue"]
 Send [shape="box" label="Send" color="blue" fontcolor="blue"]
 Receive [shape="box" label="Receive" color="blue" fontcolor="blue"]
 Code [shape="box" label="Code" color="blue" fontcolor="blue"]
 Lister [shape="box" label="(nameplate)\nLister"
         color="blue" fontcolor="blue"]
 Allocator [shape="box" label="(nameplate)\nAllocator"
            color="blue" fontcolor="blue"]
 Input [shape="box" label="(interactive\ncode)\nInput"
        color="blue" fontcolor="blue"]
 Terminator [shape="box" color="blue" fontcolor="blue"]
 InputHelperAPI [shape="oval" label="input\nhelper\nAPI"
                 color="blue" fontcolor="blue"]

 #Connection -> websocket [color="blue"]
 #Connection -> Order [color="blue"]

 Wormhole -> Boss [style="dashed"
                   label="allocate_code\ninput_code\nset_code\nsend\nclose\n(once)"
                   color="red" fontcolor="red"]
 #Wormhole -> Boss [color="blue"]
 Boss -> Wormhole [style="dashed" label="got_code\ngot_key\ngot_verifier\ngot_version\nreceived (seq)\nclosed\n(once)"]

 #Boss -> Connection [color="blue"]
 Boss -> Connection [style="dashed" label="start"
                     color="red" fontcolor="red"]
 Connection -> Boss [style="dashed" label="rx_welcome\nrx_error\nerror"]

 Boss -> Send [style="dashed" color="red" fontcolor="red" label="send"]

 #Boss -> Mailbox [color="blue"]
 Mailbox -> Order [style="dashed" label="got_message (once)"]
 Key -> Boss [style="dashed" label="got_key\nscared"]
 Order -> Key [style="dashed" label="got_pake"]
 Order -> Receive [style="dashed" label="got_message"]
 #Boss -> Key [color="blue"]
 Key -> Mailbox [style="dashed"
                 label="add_message (pake)\nadd_message (version)"]
 Receive -> Send [style="dashed" label="got_verified_key"]
 Send -> Mailbox [style="dashed" color="red" fontcolor="red"
                  label="add_message (phase)"]

 Key -> Receive [style="dashed" label="got_key"]
 Receive -> Boss [style="dashed"
                  label="happy\nscared\ngot_verifier\ngot_message"]
 Nameplate -> Connection [style="dashed"
                          label="tx_claim\ntx_release"]
 Connection -> Nameplate [style="dashed"
                          label="connected\nlost\nrx_claimed\nrx_released"]
 Mailbox -> Nameplate [style="dashed" label="release"]
 Nameplate -> Mailbox [style="dashed" label="got_mailbox"]
 Nameplate -> Input [style="dashed" label="got_wordlist"]

 Mailbox -> Connection [style="dashed" color="red" fontcolor="red"
                        label="tx_open\ntx_add\ntx_close"
                        ]
 Connection -> Mailbox [style="dashed"
                        label="connected\nlost\nrx_message\nrx_closed\nstopped"]

 Connection -> Lister [style="dashed"
                       label="connected\nlost\nrx_nameplates"
                       ]
 Lister -> Connection [style="dashed"
                       label="tx_list"
                       ]

 #Boss -> Code [color="blue"]
 Connection -> Allocator [style="dashed"
                          label="connected\nlost\nrx_allocated"]
 Allocator -> Connection [style="dashed" color="red" fontcolor="red"
                          label="tx_allocate"
                          ]
 Lister -> Input [style="dashed"
                  label="got_nameplates"
                  ]
 #Code -> Lister [color="blue"]
 Input -> Lister [style="dashed" color="red" fontcolor="red"
                  label="refresh"
                  ]
 Boss -> Code [style="dashed" color="red" fontcolor="red"
               label="allocate_code\ninput_code\nset_code"]
 Code -> Boss [style="dashed" label="got_code"]
 Code -> Key [style="dashed" label="got_code"]
 Code -> Nameplate [style="dashed" label="set_nameplate"]

 Code -> Input [style="dashed" color="red" fontcolor="red" label="start"]
 Input -> Code [style="dashed" label="got_nameplate\nfinished_input"]
 InputHelperAPI -> Input [label="refresh_nameplates\nget_nameplate_completions\nchoose_nameplate\nget_word_completions\nchoose_words" color="orange" fontcolor="orange"]

 Code -> Allocator [style="dashed" color="red" fontcolor="red"
                    label="allocate"]
 Allocator -> Code [style="dashed" label="allocated"]

 Nameplate -> Terminator [style="dashed" label="nameplate_done"]
 Mailbox -> Terminator [style="dashed" label="mailbox_done"]
 Terminator -> Nameplate [style="dashed" label="close"]
 Terminator -> Mailbox [style="dashed" label="close"]
 Terminator -> Connection [style="dashed" label="stop"]
 Connection -> Terminator [style="dashed" label="stopped"]
 Terminator -> Boss [style="dashed" label="closed\n(once)"]
 Boss -> Terminator [style="dashed" color="red" fontcolor="red"
                     label="close"]
}

docs/state-machines/mailbox.dot (new file, 98 lines)
@@ -0,0 +1,98 @@
digraph {
 /* new idea */

 title [label="Mailbox\nMachine" style="dotted"]

 {rank=same; S0A S0B}
 S0A [label="S0A:\nunknown"]
 S0A -> S0B [label="connected"]
 S0B [label="S0B:\nunknown\n(bound)" color="orange"]

 S0B -> S0A [label="lost"]

 S0A -> P0A_queue [label="add_message" style="dotted"]
 P0A_queue [shape="box" label="queue" style="dotted"]
 P0A_queue -> S0A [style="dotted"]
 S0B -> P0B_queue [label="add_message" style="dotted"]
 P0B_queue [shape="box" label="queue" style="dotted"]
 P0B_queue -> S0B [style="dotted"]

 subgraph {rank=same; S1A P_open}
 S0A -> S1A [label="got_mailbox"]
 S1A [label="S1A:\nknown"]
 S1A -> P_open [label="connected"]
 S1A -> P1A_queue [label="add_message" style="dotted"]
 P1A_queue [shape="box" label="queue" style="dotted"]
 P1A_queue -> S1A [style="dotted"]
 S1A -> S2A [style="invis"]
 P_open -> P2_connected [style="invis"]

 S0A -> S2A [style="invis"]
 S0B -> P_open [label="got_mailbox" color="orange" fontcolor="orange"]
 P_open [shape="box"
         label="store mailbox\nRC.tx_open\nRC.tx_add(queued)" color="orange"]
 P_open -> S2B [color="orange"]

 subgraph {rank=same; S2A S2B P2_connected}
 S2A [label="S2A:\nknown\nmaybe opened"]
 S2B [label="S2B:\nopened\n(bound)" color="green"]
 S2A -> P2_connected [label="connected"]
 S2B -> S2A [label="lost"]

 P2_connected [shape="box" label="RC.tx_open\nRC.tx_add(queued)"]
 P2_connected -> S2B

 S2A -> P2_queue [label="add_message" style="dotted"]
 P2_queue [shape="box" label="queue" style="dotted"]
 P2_queue -> S2A [style="dotted"]

 S2B -> P2_send [label="add_message"]
 P2_send [shape="box" label="queue\nRC.tx_add(msg)"]
 P2_send -> S2B

 {rank=same; P2_send P2_close P2_process_theirs}
 P2_process_theirs -> P2_close [style="invis"]
 S2B -> P2_process_ours [label="rx_message\n(ours)"]
 P2_process_ours [shape="box" label="dequeue"]
 P2_process_ours -> S2B
 S2B -> P2_process_theirs [label="rx_message\n(theirs)"
                           color="orange" fontcolor="orange"]
 P2_process_theirs [shape="box" color="orange"
                    label="N.release\nO.got_message if new\nrecord"
                    ]
 P2_process_theirs -> S2B [color="orange"]

 S2B -> P2_close [label="close" color="red"]
 P2_close [shape="box" label="RC.tx_close" color="red"]
 P2_close -> S3B [color="red"]

 subgraph {rank=same; S3A P3_connected S3B}
 S3A [label="S3A:\nclosing"]
 S3A -> P3_connected [label="connected"]
 P3_connected [shape="box" label="RC.tx_close"]
 P3_connected -> S3B
 #S3A -> S3A [label="add_message"] # implicit
 S3B [label="S3B:\nclosing\n(bound)" color="red"]
 S3B -> S3B [label="add_message\nrx_message\nclose"]
 S3B -> S3A [label="lost"]

 subgraph {rank=same; P3A_done P3B_done}
 P3A_done [shape="box" label="T.mailbox_done" color="red"]
 P3A_done -> S4A
 S3B -> P3B_done [label="rx_closed" color="red"]
 P3B_done [shape="box" label="T.mailbox_done" color="red"]
 P3B_done -> S4B

 subgraph {rank=same; S4A S4B}
 S4A [label="S4A:\nclosed"]
 S4B [label="S4B:\nclosed"]
 S4A -> S4B [label="connected"]
 S4B -> S4A [label="lost"]
 S4B -> S4B [label="add_message\nrx_message\nclose"] # is "close" needed?

 S0A -> P3A_done [label="close" color="red"]
 S0B -> P3B_done [label="close" color="red"]
 S1A -> P3A_done [label="close" color="red"]
 S2A -> S3A [label="close" color="red"]

}
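Two behaviors in the diagram above are easy to miss: on `rx_message (ours)` the server's echo of our own message acts as a delivery acknowledgment and dequeues it, while on `rx_message (theirs)` a peer message is delivered to the Ordering machine only if it has not been seen before (re-delivery is normal after a reconnect). A toy stdlib-only sketch of just those two paths, with illustrative names (the real machine also tracks bound/unbound states and drives `RC.tx_*`):

```python
class Mailbox:
    """Dequeue our echoed messages; deliver peer messages at most once."""

    def __init__(self, side, order):
        self._side = side        # our side-id string
        self._order = order      # O: receives got_message
        self._pending = {}       # phase -> body, awaiting the server echo
        self._processed = set()  # (side, phase) pairs already delivered

    def add_message(self, phase, body):
        # queue until the server echoes it back to us
        # (when bound, the real machine also does RC.tx_add(phase, body))
        self._pending[phase] = body

    def rx_message(self, side, phase, body):
        if side == self._side:
            # ours: the broadcast is our delivery ack, so dequeue
            self._pending.pop(phase, None)
        elif (side, phase) not in self._processed:
            # theirs, and new: record it and pass it along exactly once
            self._processed.add((side, phase))
            self._order.got_message(side, phase, body)
```

Duplicate peer messages after a reconnect hit the `_processed` set and are silently dropped, which is what gives the Ordering machine its "exactly once" input.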

docs/state-machines/nameplate.dot (new file, 101 lines)
@@ -0,0 +1,101 @@
digraph {
 /* new idea */

 title [label="Nameplate\nMachine" style="dotted"]
 title -> S0A [style="invis"]

 {rank=same; S0A S0B}
 S0A [label="S0A:\nknow nothing"]
 S0B [label="S0B:\nknow nothing\n(bound)" color="orange"]
 S0A -> S0B [label="connected"]
 S0B -> S0A [label="lost"]

 S0A -> S1A [label="set_nameplate"]
 S0B -> P2_connected [label="set_nameplate" color="orange" fontcolor="orange"]

 S1A [label="S1A:\nnever claimed"]
 S1A -> P2_connected [label="connected"]

 S1A -> S2A [style="invis"]
 S1B [style="invis"]
 S0B -> S1B [style="invis"]
 S1B -> S2B [style="invis"]
 {rank=same; S1A S1B}
 S1A -> S1B [style="invis"]

 {rank=same; S2A P2_connected S2B}
 S2A [label="S2A:\nmaybe claimed"]
 S2A -> P2_connected [label="connected"]
 P2_connected [shape="box"
               label="RC.tx_claim" color="orange"]
 P2_connected -> S2B [color="orange"]
 S2B [label="S2B:\nmaybe claimed\n(bound)" color="orange"]

 #S2B -> S2A [label="lost"] # causes bad layout
 S2B -> foo2 [label="lost"]
 foo2 [label="" style="dashed"]
 foo2 -> S2A

 S2A -> S3A [label="(none)" style="invis"]
 S2B -> P_open [label="rx_claimed" color="orange" fontcolor="orange"]
 P_open [shape="box" label="I.got_wordlist\nM.got_mailbox" color="orange"]
 P_open -> S3B [color="orange"]

 subgraph {rank=same; S3A S3B}
 S3A [label="S3A:\nclaimed"]
 S3B [label="S3B:\nclaimed\n(bound)" color="orange"]
 S3A -> S3B [label="connected"]
 S3B -> foo3 [label="lost"]
 foo3 [label="" style="dashed"]
 foo3 -> S3A

 #S3B -> S3B [label="rx_claimed"] # shouldn't happen

 S3B -> P3_release [label="release" color="orange" fontcolor="orange"]
 P3_release [shape="box" color="orange" label="RC.tx_release"]
 P3_release -> S4B [color="orange"]

 subgraph {rank=same; S4A P4_connected S4B}
 S4A [label="S4A:\nmaybe released\n"]

 S4B [label="S4B:\nmaybe released\n(bound)" color="orange"]
 S4A -> P4_connected [label="connected"]
 P4_connected [shape="box" label="RC.tx_release"]
 S4B -> S4B [label="release"]

 P4_connected -> S4B
 S4B -> foo4 [label="lost"]
 foo4 [label="" style="dashed"]
 foo4 -> S4A

 S4A -> S5B [style="invis"]
 P4_connected -> S5B [style="invis"]

 subgraph {rank=same; P5A_done P5B_done}
 S4B -> P5B_done [label="rx released" color="orange" fontcolor="orange"]
 P5B_done [shape="box" label="T.nameplate_done" color="orange"]
 P5B_done -> S5B [color="orange"]

 subgraph {rank=same; S5A S5B}
 S5A [label="S5A:\nreleased"]
 S5A -> S5B [label="connected"]
 S5B -> S5A [label="lost"]
 S5B [label="S5B:\nreleased" color="green"]

 S5B -> S5B [label="release\nclose"]

 P5A_done [shape="box" label="T.nameplate_done"]
 P5A_done -> S5A

 S0A -> P5A_done [label="close" color="red"]
 S1A -> P5A_done [label="close" color="red"]
 S2A -> S4A [label="close" color="red"]
 S3A -> S4A [label="close" color="red"]
 S4A -> S4A [label="close" color="red"]
 S0B -> P5B_done [label="close" color="red"]
 S2B -> P3_release [label="close" color="red"]
 S3B -> P3_release [label="close" color="red"]
 S4B -> S4B [label="close" color="red"]

}

docs/state-machines/order.dot (new file, 35 lines)
@@ -0,0 +1,35 @@
digraph {
 start [label="Order\nMachine" style="dotted"]
 /* our goal: deliver PAKE before anything else */

 {rank=same; S0 P0_other}
 {rank=same; S1 P1_other}

 S0 [label="S0: no pake" color="orange"]
 S1 [label="S1: yes pake" color="green"]
 S0 -> P0_pake [label="got_pake"
                color="orange" fontcolor="orange"]
 P0_pake [shape="box" color="orange"
          label="K.got_pake\ndrain queue:\n[R.got_message]"
          ]
 P0_pake -> S1 [color="orange"]
 S0 -> P0_other [label="got_version\ngot_phase" style="dotted"]
 P0_other [shape="box" label="queue" style="dotted"]
 P0_other -> S0 [style="dotted"]

 S1 -> P1_other [label="got_version\ngot_phase"]
 P1_other [shape="box" label="R.got_message"]
 P1_other -> S1

 /* the Mailbox will deliver each message exactly once, but doesn't
 guarantee ordering: if Alice starts the process, then disconnects,
 then Bob starts (reading PAKE, sending both his PAKE and his VERSION
 phase), then Alice will see both PAKE and VERSION on her next
 connect, and might get the VERSION first.

 The Wormhole will queue inbound messages that it isn't ready for. The
 wormhole shim that lets applications do w.get(phase=) must do
 something similar, queueing inbound messages until it sees one for
 the phase it currently cares about. */
}
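The whole Order machine fits in a few lines: buffer every `got_version`/`got_phase` event until the PAKE message arrives, hand PAKE to the Key machine, then drain the buffer to Receive in arrival order. A minimal sketch under assumed names (`K`/`R` stand-ins for the Key and Receive machines; not the actual implementation):

```python
class Order:
    """Deliver PAKE to K before anything reaches R, preserving arrival order."""

    def __init__(self, key, receive):
        self._key = key          # K: receives got_pake
        self._receive = receive  # R: receives got_message
        self._queue = []
        self._got_pake = False

    def got_message(self, side, phase, body):
        if phase == "pake" and not self._got_pake:
            self._got_pake = True
            self._key.got_pake(body)            # K.got_pake
            for args in self._queue:            # drain queue: [R.got_message]
                self._receive.got_message(*args)
            self._queue = []
        elif not self._got_pake:
            self._queue.append((side, phase, body))        # S0: no pake
        else:
            self._receive.got_message(side, phase, body)   # S1: yes pake
```

This is exactly the scenario the trailing comment describes: if VERSION arrives before PAKE after a reconnect, it waits in `_queue` and is released only once PAKE has been processed.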

docs/state-machines/receive.dot (new file, 39 lines)
@@ -0,0 +1,39 @@
digraph {

 /* could shave a RTT by committing to the nameplate early, before
 finishing the rest of the code input. While the user is still
 typing/completing the code, we claim the nameplate, open the mailbox,
 and retrieve the peer's PAKE message. Then as soon as the user
 finishes entering the code, we build our own PAKE message, send PAKE,
 compute the key, send VERSION. Starting from the Return, this saves
 two round trips. OTOH it adds consequences to hitting Tab. */

 start [label="Receive\nMachine" style="dotted"]

 S0 [label="S0:\nunknown key" color="orange"]
 S0 -> P0_got_key [label="got_key" color="orange"]

 P0_got_key [shape="box" label="record key" color="orange"]
 P0_got_key -> S1 [color="orange"]

 S1 [label="S1:\nunverified key" color="orange"]
 S1 -> P_mood_scary [label="got_message\n(bad)"]
 S1 -> P1_accept_msg [label="got_message\n(good)" color="orange"]
 P1_accept_msg [shape="box" label="S.got_verified_key\nB.happy\nB.got_verifier\nB.got_message"
                color="orange"]
 P1_accept_msg -> S2 [color="orange"]

 S2 [label="S2:\nverified key" color="green"]

 S2 -> P2_accept_msg [label="got_message\n(good)" color="orange"]
 S2 -> P_mood_scary [label="got_message\n(bad)"]

 P2_accept_msg [label="B.got_message" shape="box" color="orange"]
 P2_accept_msg -> S2 [color="orange"]

 P_mood_scary [shape="box" label="B.scared" color="red"]
 P_mood_scary -> S3 [color="red"]

 S3 [label="S3:\nscared" color="red"]
 S3 -> S3 [label="got_message"]
}

docs/state-machines/send.dot (new file, 19 lines)
@@ -0,0 +1,19 @@
digraph {
 start [label="Send\nMachine" style="dotted"]

 {rank=same; S0 P0_queue}
 {rank=same; S1 P1_send}

 S0 [label="S0: unknown\nkey"]
 S0 -> P0_queue [label="send" style="dotted"]
 P0_queue [shape="box" label="queue" style="dotted"]
 P0_queue -> S0 [style="dotted"]
 S0 -> P0_got_key [label="got_verified_key"]

 P0_got_key [shape="box" label="drain queue:\n[encrypt\n M.add_message]"]
 P0_got_key -> S1
 S1 [label="S1: verified\nkey"]
 S1 -> P1_send [label="send"]
 P1_send [shape="box" label="encrypt\nM.add_message"]
 P1_send -> S1
}
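The Send machine follows the same queue-then-drain pattern as the diagram shows: plaintext is queued while the key is unknown, and on `got_verified_key` the backlog is encrypted and handed to the Mailbox. A sketch with illustrative names; the `encrypt` callable here is a stand-in for the real per-phase NaCl encryption, not its actual signature:

```python
class Send:
    """Queue outbound plaintext until the key is verified, then encrypt."""

    def __init__(self, mailbox, encrypt):
        self._mailbox = mailbox  # M: receives add_message(phase, ciphertext)
        self._encrypt = encrypt  # stand-in: (key, phase, plaintext) -> bytes
        self._key = None
        self._queue = []

    def send(self, phase, plaintext):
        if self._key is None:
            self._queue.append((phase, plaintext))  # S0: queue
        else:                                       # S1: encrypt + deliver
            ct = self._encrypt(self._key, phase, plaintext)
            self._mailbox.add_message(phase, ct)

    def got_verified_key(self, key):
        self._key = key
        for phase, plaintext in self._queue:        # drain queue
            self._mailbox.add_message(phase, self._encrypt(key, phase, plaintext))
        self._queue = []
```

Because `send` may fire before the PAKE exchange completes, this buffering is what lets the application call `w.send()` immediately after setting the code.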

docs/state-machines/terminator.dot (new file, 50 lines)
@@ -0,0 +1,50 @@
digraph {
 /* M_close pathways */
 title [label="Terminator\nMachine" style="dotted"]

 initial [style="invis"]
 initial -> Snmo [style="dashed"]

 Snmo [label="Snmo:\nnameplate active\nmailbox active\nopen" color="orange"]
 Sno [label="Sno:\nnameplate active\nmailbox done\nopen"]
 Smo [label="Smo:\nnameplate done\nmailbox active\nopen" color="green"]
 S0o [label="S0o:\nnameplate done\nmailbox done\nopen"]

 Snmo -> Sno [label="mailbox_done"]
 Snmo -> Smo [label="nameplate_done" color="orange"]
 Sno -> S0o [label="nameplate_done"]
 Smo -> S0o [label="mailbox_done"]

 Snmo -> Snm [label="close"]
 Sno -> Sn [label="close"]
 Smo -> Sm [label="close" color="red"]
 S0o -> P_stop [label="close"]

 Snm [label="Snm:\nnameplate active\nmailbox active\nclosing"
      style="dashed"]
 Sn [label="Sn:\nnameplate active\nmailbox done\nclosing"
     style="dashed"]
 Sm [label="Sm:\nnameplate done\nmailbox active\nclosing"
     style="dashed" color="red"]

 Snm -> Sn [label="mailbox_done"]
 Snm -> Sm [label="nameplate_done"]
 Sn -> P_stop [label="nameplate_done"]
 Sm -> P_stop [label="mailbox_done" color="red"]

 {rank=same; S_stopping Pss S_stopped}
 P_stop [shape="box" label="RC.stop" color="red"]
 P_stop -> S_stopping [color="red"]

 S_stopping [label="S_stopping" color="red"]
 S_stopping -> Pss [label="stopped"]
 Pss [shape="box" label="B.closed"]
 Pss -> S_stopped

 S_stopped [label="S_stopped"]

 other [shape="box" style="dashed"
        label="close -> N.close, M.close"]

}

docs/w.dot (new file, 86 lines)
@@ -0,0 +1,86 @@
digraph {

 /*
 NM_start [label="Nameplate\nMachine" style="dotted"]
 NM_start -> NM_S_unclaimed [style="invis"]
 NM_S_unclaimed [label="no nameplate"]
 NM_S_unclaimed -> NM_S_unclaimed [label="NM_release()"]
 NM_P_set_nameplate [shape="box" label="post_claim()"]
 NM_S_unclaimed -> NM_P_set_nameplate [label="NM_set_nameplate()"]
 NM_S_claiming [label="claim pending"]
 NM_P_set_nameplate -> NM_S_claiming
 NM_S_claiming -> NM_P_rx_claimed [label="rx claimed"]
 NM_P_rx_claimed [label="MM_set_mailbox()" shape="box"]
 NM_P_rx_claimed -> NM_S_claimed
 NM_S_claimed [label="claimed"]
 NM_S_claimed -> NM_P_release [label="NM_release()"]
 NM_P_release [shape="box" label="post_release()"]
 NM_P_release -> NM_S_releasing
 NM_S_releasing [label="release pending"]
 NM_S_releasing -> NM_S_releasing [label="NM_release()"]
 NM_S_releasing -> NM_S_released [label="rx released"]
 NM_S_released [label="released"]
 NM_S_released -> NM_S_released [label="NM_release()"]
 */

 /*
 MM_start [label="Mailbox\nMachine" style="dotted"]
 MM_start -> MM_S_want_mailbox [style="invis"]
 MM_S_want_mailbox [label="want mailbox"]
 MM_S_want_mailbox -> MM_P_queue1 [label="MM_send()" style="dotted"]
 MM_P_queue1 [shape="box" style="dotted" label="queue message"]
 MM_P_queue1 -> MM_S_want_mailbox [style="dotted"]
 MM_P_open_mailbox [shape="box" label="post_open()"]
 MM_S_want_mailbox -> MM_P_open_mailbox [label="set_mailbox()"]
 MM_P_send_queued [shape="box" label="post add() for\nqueued messages"]
 MM_P_open_mailbox -> MM_P_send_queued
 MM_P_send_queued -> MM_S_open
 MM_S_open [label="open\n(unused)"]
 MM_S_open -> MM_P_send1 [label="MM_send()"]
 MM_P_send1 [shape="box" label="post add()\nfor message"]
 MM_P_send1 -> MM_S_open
 MM_S_open -> MM_P_release1 [label="MM_close()"]
 MM_P_release1 [shape="box" label="NM_release()"]
 MM_P_release1 -> MM_P_close

 MM_S_open -> MM_P_rx [label="rx message"]
 MM_P_rx [shape="box" label="WM_rx_pake()\nor WM_rx_msg()"]
 MM_P_rx -> MM_P_release2
 MM_P_release2 [shape="box" label="NM_release()"]
 MM_P_release2 -> MM_S_used
 MM_S_used [label="open\n(used)"]
 MM_S_used -> MM_P_rx [label="rx message"]
 MM_S_used -> MM_P_send2 [label="MM_send()"]
 MM_P_send2 [shape="box" label="post add()\nfor message"]
 MM_P_send2 -> MM_S_used
 MM_S_used -> MM_P_close [label="MM_close()"]
 MM_P_close [shape="box" label="post_close(mood)"]
 MM_P_close -> MM_S_closing
 MM_S_closing [label="waiting"]
 MM_S_closing -> MM_S_closing [label="MM_close()"]
 MM_S_closing -> MM_S_closed [label="rx closed"]
 MM_S_closed [label="closed"]
 MM_S_closed -> MM_S_closed [label="MM_close()"]
 */

 /* upgrading to new PAKE algorithm, the slower form (the faster form
 puts the pake_abilities record in the nameplate_info message) */
 /*
 P2_start [label="(PAKE\nupgrade)\nstart"]
 P2_start -> P2_P_send_abilities [label="set_code()"]
 P2_P_send_abilities [shape="box" label="send pake_abilities"]
 P2_P_send_abilities -> P2_wondering
 P2_wondering [label="waiting\nwondering"]
 P2_wondering -> P2_P_send_pakev1 [label="rx pake_v1"]
 P2_P_send_pakev1 [shape="box" label="send pake_v1"]
 P2_P_send_pakev1 -> P2_P_process_v1
 P2_P_process_v1 [shape="box" label="process v1"]
 P2_wondering -> P2_P_find_max [label="rx pake_abilities"]
 P2_P_find_max [shape="box" label="find max"]
 P2_P_find_max -> P2_P_send_pakev2
 P2_P_send_pakev2
 P2_P_send_pakev2 [shape="box" label="send pake_v2"]
 P2_P_send_pakev2 -> P2_P_process_v2 [label="rx pake_v2"]
 P2_P_process_v2 [shape="box" label="process v2"]
 */
}

misc/demo-journal.py (new file, 270 lines)
@@ -0,0 +1,270 @@
import os, sys, json, random, contextlib

from twisted.internet import task, defer, endpoints
from twisted.application import service, internet
from twisted.web import server, static, resource
from wormhole import journal, wormhole

# considerations for state management:
# * be somewhat principled about the data (e.g. have a schema)
# * discourage accidental schema changes
# * avoid surprise mutations by app code (don't hand out mutables)
# * discourage app from keeping state itself: make the state object easy
#   enough to use for everything. The app should only hold objects that are
#   active (Services, subscribers, etc). The app must wire up these objects
#   each time.
class State(object):
    @classmethod
    def create_empty(klass):
        self = klass()
        # to avoid being tripped up by state-mutation side-effect bugs, we
        # hold the serialized state in RAM, and re-deserialize it each time
        # someone asks for a piece of it.
        empty = {"version": 1,
                 "invitations": {}, # iid->invitation_state
                 "contacts": [],
                 }
        self._bytes = json.dumps(empty).encode("utf-8")
        return self

    @classmethod
    def from_filename(klass, fn):
        self = klass()
        with open(fn, "rb") as f:
            self._bytes = f.read()
        # version check
        data = self._as_data()
        assert data["version"] == 1
        # schema check?
        return self

    def save_to_filename(self, fn):
        tmpfn = fn+".tmp"
        with open(tmpfn, "wb") as f:
            f.write(self._bytes)
        os.rename(tmpfn, fn)

    def _as_data(self):
        return json.loads(self._bytes.decode("utf-8"))

    @contextlib.contextmanager
    def _mutate(self):
        data = self._as_data()
        yield data # mutable
        self._bytes = json.dumps(data).encode("utf-8")

    def get_all_invitations(self):
        return self._as_data()["invitations"]
    def add_invitation(self, iid, invitation_state):
        with self._mutate() as data:
            data["invitations"][iid] = invitation_state
    def update_invitation(self, iid, invitation_state):
        with self._mutate() as data:
            assert iid in data["invitations"]
            data["invitations"][iid] = invitation_state
    def remove_invitation(self, iid):
        with self._mutate() as data:
            del data["invitations"][iid]

    def add_contact(self, contact):
        with self._mutate() as data:
            data["contacts"].append(contact)
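The State class above keeps only serialized bytes in RAM and re-deserializes on every access, so callers can never mutate shared structures behind its back. A tiny standalone demonstration of that same pattern, independent of the demo's schema:

```python
import json, contextlib

class FrozenStore:
    """Hold state as serialized JSON; hand out fresh copies on every read."""
    def __init__(self, initial):
        self._bytes = json.dumps(initial).encode("utf-8")

    def read(self):
        # deserializing each time means callers get an independent copy
        return json.loads(self._bytes.decode("utf-8"))

    @contextlib.contextmanager
    def mutate(self):
        data = self.read()
        yield data  # caller mutates this copy
        self._bytes = json.dumps(data).encode("utf-8")  # committed on exit

s = FrozenStore({"contacts": []})
snapshot = s.read()
snapshot["contacts"].append("mallory")  # mutating a copy: store unaffected
with s.mutate() as data:
    data["contacts"].append("alice")    # committed when the block exits
```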
class Root(resource.Resource):
    pass

class Status(resource.Resource):
    def __init__(self, c):
        resource.Resource.__init__(self)
        self._call = c
    def render_GET(self, req):
        data = self._call()
        req.setHeader(b"content-type", "text/plain")
        return data

class Action(resource.Resource):
    def __init__(self, c):
        resource.Resource.__init__(self)
        self._call = c
    def render_POST(self, req):
        req.setHeader(b"content-type", "text/plain")
        try:
            args = json.load(req.content)
        except ValueError:
            req.setResponseCode(500)
            return b"bad JSON"
        data = self._call(args)
        return data
class Agent(service.MultiService):
    def __init__(self, basedir, reactor):
        service.MultiService.__init__(self)
        self._basedir = basedir
        self._reactor = reactor

        root = Root()
        site = server.Site(root)
        ep = endpoints.serverFromString(reactor, "tcp:8220")
        internet.StreamServerEndpointService(ep, site).setServiceParent(self)

        self._jm = journal.JournalManager(self._save_state)

        root.putChild(b"", static.Data("root", "text/plain"))
        root.putChild(b"list-invitations", Status(self._list_invitations))
        root.putChild(b"invite", Action(self._invite)) # {petname:}
        root.putChild(b"accept", Action(self._accept)) # {petname:, code:}

        self._state_fn = os.path.join(self._basedir, "state.json")
        self._state = State.from_filename(self._state_fn)

        self._wormholes = {}
        for iid, invitation_state in self._state.get_all_invitations().items():
            def _dispatch(event, *args, **kwargs):
                self._dispatch_wormhole_event(iid, event, *args, **kwargs)
            w = wormhole.journaled_from_data(invitation_state["wormhole"],
                                             reactor=self._reactor,
                                             journal=self._jm,
                                             event_handler=self,
                                             event_handler_args=(iid,))
            self._wormholes[iid] = w
            w.setServiceParent(self)
    def _save_state(self):
        self._state.save_to_filename(self._state_fn)

    def _list_invitations(self):
        inv = self._state.get_all_invitations()
        lines = ["%d: %s" % (iid, inv[iid]) for iid in sorted(inv)]
        return b"\n".join(lines)+b"\n"
    def _invite(self, args):
        print "invite", args
        petname = args["petname"]
        # it'd be better to use a unique object for the event_handler
        # correlation, but we can't store them into the state database. I'm
        # not 100% sure we need one for the database: maybe it should hold a
        # list instead, and assign lookup keys at runtime. If they really
        # need to be serializable, they should be allocated rather than
        # random.
        iid = random.randint(1,1000)
        my_pubkey = random.randint(1,1000)
        with self._jm.process():
            w = wormhole.journaled(reactor=self._reactor, journal=self._jm,
                                   event_handler=self,
                                   event_handler_args=(iid,))
            self._wormholes[iid] = w
            w.setServiceParent(self)
            w.get_code() # event_handler means code returns via callback
            invitation_state = {"wormhole": w.to_data(),
                                "petname": petname,
                                "my_pubkey": my_pubkey,
                                }
            self._state.add_invitation(iid, invitation_state)
        return b"ok"
    def _accept(self, args):
        print "accept", args
        petname = args["petname"]
        code = args["code"]
        iid = random.randint(1,1000)
        my_pubkey = random.randint(2,2000)
        with self._jm.process():
            w = wormhole.journaled(reactor=self._reactor, journal=self._jm,
                                   event_handler=self,
                                   event_handler_args=(iid,))
            w.set_code(code)
            md = {"my_pubkey": my_pubkey}
            w.send(json.dumps(md).encode("utf-8"))
            invitation_state = {"wormhole": w.to_data(),
                                "petname": petname,
                                "my_pubkey": my_pubkey,
                                }
            self._state.add_invitation(iid, invitation_state)
        return b"ok"
    # dispatch options:
    # * register one function, which takes (eventname, *args)
    #   * to handle multiple wormholes, the app must give us a closure
    # * register multiple functions (one per event type)
    # * register an object, with well-known method names
    # * extra: register args and/or kwargs with the callback
    #
    # events to dispatch:
    #  generated_code(code)
    #  got_verifier(verifier_bytes)
    #  verified()
    #  got_data(data_bytes)
    #  closed()
    def wormhole_dispatch_got_code(self, code, iid):
        # we're already in a jm.process() context
        invitation_state = self._state.get_all_invitations()[iid]
        invitation_state["code"] = code
        self._state.update_invitation(iid, invitation_state)
        self._wormholes[iid].set_code(code)
        # notify UI subscribers to update the display

    def wormhole_dispatch_got_verifier(self, verifier, iid):
        pass
    def wormhole_dispatch_verified(self, _, iid):
        pass

    def wormhole_dispatch_got_data(self, data, iid):
        invitation_state = self._state.get_all_invitations()[iid]
        md = json.loads(data.decode("utf-8"))
        contact = {"petname": invitation_state["petname"],
                   "my_pubkey": invitation_state["my_pubkey"],
                   "their_pubkey": md["my_pubkey"],
                   }
        self._state.add_contact(contact)
        self._wormholes[iid].close() # now waiting for "closed"

    def wormhole_dispatch_closed(self, _, iid):
        self._wormholes[iid].disownServiceParent()
        del self._wormholes[iid]
        self._state.remove_invitation(iid)
    def handle_app_event(self, args, ack_f): # sample function
        # Imagine here that the app has received a message (not
        # wormhole-related) from some other server, and needs to act on it.
        # Also imagine that ack_f() is how we tell the sender that they can
        # stop sending the message, or how we ask our poller/subscriber
        # client to send a DELETE message. If the process dies before ack_f()
        # delivers whatever it needs to deliver, then in the next launch,
        # handle_app_event() will be called again.
        stuff = parse(args)    # parse() is a stand-in for app-specific logic
        with self._jm.process():
            update_my_state()  # as is update_my_state()
            self._jm.queue_outbound(ack_f)
def create(reactor, basedir):
    os.mkdir(basedir)
    s = State.create_empty()
    s.save_to_filename(os.path.join(basedir, "state.json"))
    return defer.succeed(None)

def run(reactor, basedir):
    a = Agent(basedir, reactor)
    a.startService()
    print "agent listening on http://localhost:8220/"
    d = defer.Deferred()
    return d

if __name__ == "__main__":
    command = sys.argv[1]
    basedir = sys.argv[2]
    if command == "create":
        task.react(create, (basedir,))
    elif command == "run":
        task.react(run, (basedir,))
    else:
        print "Unrecognized subcommand '%s'" % command
        sys.exit(1)
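The demo leans on journal.JournalManager to make side effects atomic with state checkpoints: outbound actions queued inside a jm.process() block only fire after the new state has been saved, so a crash either replays the whole step or none of it. A minimal plain-Python sketch of that ordering contract (JournalManager's real API lives in the wormhole package; MiniJournal is just an illustration):

```python
import contextlib

class MiniJournal:
    """Queue outbound actions; run them only after the state checkpoint."""
    def __init__(self, save_checkpoint):
        self._save = save_checkpoint
        self._outbound = []

    def queue_outbound(self, f, *args):
        self._outbound.append((f, args))

    @contextlib.contextmanager
    def process(self):
        yield                           # app mutates its state here
        self._save()                    # checkpoint first...
        queued, self._outbound = self._outbound, []
        for f, args in queued:          # ...then release side effects
            f(*args)

events = []
jm = MiniJournal(lambda: events.append("saved"))
with jm.process():
    events.append("handled")
    jm.queue_outbound(events.append, "acked")
# the ack only goes out after the checkpoint: handled, saved, acked
```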
1  setup.py
@@ -45,6 +45,7 @@ setup(name="magic-wormhole",
        "six",
        "twisted[tls]",
        "autobahn[twisted] >= 0.14.1",
        "automat",
        "hkdf", "tqdm",
        "click",
        "humanize",
@@ -2,3 +2,8 @@
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions

from .wormhole import create
from ._rlcompleter import input_with_completion

__all__ = ["create", "input_with_completion", "__version__"]
75  src/wormhole/_allocator.py  Normal file
@@ -0,0 +1,75 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
from attr import attrs, attrib
from attr.validators import provides
from automat import MethodicalMachine
from . import _interfaces

@attrs
@implementer(_interfaces.IAllocator)
class Allocator(object):
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def wire(self, rendezvous_connector, code):
        self._RC = _interfaces.IRendezvousConnector(rendezvous_connector)
        self._C = _interfaces.ICode(code)

    @m.state(initial=True)
    def S0A_idle(self): pass # pragma: no cover
    @m.state()
    def S0B_idle_connected(self): pass # pragma: no cover
    @m.state()
    def S1A_allocating(self): pass # pragma: no cover
    @m.state()
    def S1B_allocating_connected(self): pass # pragma: no cover
    @m.state()
    def S2_done(self): pass # pragma: no cover

    # from Code
    @m.input()
    def allocate(self, length, wordlist): pass

    # from RendezvousConnector
    @m.input()
    def connected(self): pass
    @m.input()
    def lost(self): pass
    @m.input()
    def rx_allocated(self, nameplate): pass

    @m.output()
    def stash(self, length, wordlist):
        self._length = length
        self._wordlist = _interfaces.IWordlist(wordlist)
    @m.output()
    def stash_and_RC_tx_allocate(self, length, wordlist):
        self._length = length
        self._wordlist = _interfaces.IWordlist(wordlist)
        self._RC.tx_allocate()
    @m.output()
    def RC_tx_allocate(self):
        self._RC.tx_allocate()
    @m.output()
    def build_and_notify(self, nameplate):
        words = self._wordlist.choose_words(self._length)
        code = nameplate + "-" + words
        self._C.allocated(nameplate, code)

    S0A_idle.upon(connected, enter=S0B_idle_connected, outputs=[])
    S0B_idle_connected.upon(lost, enter=S0A_idle, outputs=[])

    S0A_idle.upon(allocate, enter=S1A_allocating, outputs=[stash])
    S0B_idle_connected.upon(allocate, enter=S1B_allocating_connected,
                            outputs=[stash_and_RC_tx_allocate])

    S1A_allocating.upon(connected, enter=S1B_allocating_connected,
                        outputs=[RC_tx_allocate])
    S1B_allocating_connected.upon(lost, enter=S1A_allocating, outputs=[])

    S1B_allocating_connected.upon(rx_allocated, enter=S2_done,
                                  outputs=[build_and_notify])

    S2_done.upon(connected, enter=S2_done, outputs=[])
    S2_done.upon(lost, enter=S2_done, outputs=[])
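The Allocator has to tolerate the rendezvous connection coming and going: an allocate() request made while offline is stashed, and the tx_allocate message is (re)sent on each (re)connect until the server answers with a nameplate. A plain-Python sketch of that retry-across-reconnect logic — this is not the Automat machine above, and the wordlist is a toy:

```python
class MiniAllocator:
    """Stash an allocation request; retransmit on every (re)connection."""
    def __init__(self, send_allocate, notify):
        self._send = send_allocate    # ask the server for a nameplate
        self._notify = notify         # deliver (nameplate, code) upward
        self._connected = False
        self._pending = None          # requested word count, if any
        self.done = False

    def connected(self):
        self._connected = True
        if self._pending is not None and not self.done:
            self._send()              # retransmit after a reconnect

    def lost(self):
        self._connected = False

    def allocate(self, length):
        self._pending = length
        if self._connected:
            self._send()

    def rx_allocated(self, nameplate):
        self.done = True              # further connect/lost events are ignored
        code = "%s-%s" % (nameplate, "-".join(["word"] * self._pending))
        self._notify(nameplate, code)

sent, got = [], []
a = MiniAllocator(lambda: sent.append("allocate"), lambda n, c: got.append(c))
a.allocate(2)              # offline: stashed, nothing sent yet
a.connected()              # sent once
a.lost(); a.connected()    # connection bounced: sent again
a.rx_allocated("4")
```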
343  src/wormhole/_boss.py  Normal file
@@ -0,0 +1,343 @@
from __future__ import print_function, absolute_import, unicode_literals
import re
import six
from zope.interface import implementer
from attr import attrs, attrib
from attr.validators import provides, instance_of
from twisted.python import log
from automat import MethodicalMachine
from . import _interfaces
from ._nameplate import Nameplate
from ._mailbox import Mailbox
from ._send import Send
from ._order import Order
from ._key import Key
from ._receive import Receive
from ._rendezvous import RendezvousConnector
from ._lister import Lister
from ._allocator import Allocator
from ._input import Input
from ._code import Code
from ._terminator import Terminator
from ._wordlist import PGPWordList
from .errors import (ServerError, LonelyError, WrongPasswordError,
                     KeyFormatError, OnlyOneCodeError, _UnknownPhaseError,
                     WelcomeError)
from .util import bytes_to_dict

@attrs
@implementer(_interfaces.IBoss)
class Boss(object):
    _W = attrib()
    _side = attrib(validator=instance_of(type(u"")))
    _url = attrib(validator=instance_of(type(u"")))
    _appid = attrib(validator=instance_of(type(u"")))
    _versions = attrib(validator=instance_of(dict))
    _welcome_handler = attrib() # TODO: validator: callable
    _reactor = attrib()
    _journal = attrib(validator=provides(_interfaces.IJournal))
    _tor_manager = attrib() # TODO: ITorManager or None
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)
    def __attrs_post_init__(self):
        self._build_workers()
        self._init_other_state()

    def _build_workers(self):
        self._N = Nameplate()
        self._M = Mailbox(self._side)
        self._S = Send(self._side, self._timing)
        self._O = Order(self._side, self._timing)
        self._K = Key(self._appid, self._versions, self._side, self._timing)
        self._R = Receive(self._side, self._timing)
        self._RC = RendezvousConnector(self._url, self._appid, self._side,
                                       self._reactor, self._journal,
                                       self._tor_manager, self._timing)
        self._L = Lister(self._timing)
        self._A = Allocator(self._timing)
        self._I = Input(self._timing)
        self._C = Code(self._timing)
        self._T = Terminator()

        self._N.wire(self._M, self._I, self._RC, self._T)
        self._M.wire(self._N, self._RC, self._O, self._T)
        self._S.wire(self._M)
        self._O.wire(self._K, self._R)
        self._K.wire(self, self._M, self._R)
        self._R.wire(self, self._S)
        self._RC.wire(self, self._N, self._M, self._A, self._L, self._T)
        self._L.wire(self._RC, self._I)
        self._A.wire(self._RC, self._C)
        self._I.wire(self._C, self._L)
        self._C.wire(self, self._A, self._N, self._K, self._I)
        self._T.wire(self, self._RC, self._N, self._M)

    def _init_other_state(self):
        self._did_start_code = False
        self._next_tx_phase = 0
        self._next_rx_phase = 0
        self._rx_phases = {} # phase -> plaintext

        self._result = "empty"
    # these methods are called from outside
    def start(self):
        self._RC.start()

    def _set_trace(self, client_name, which, file):
        names = {"B": self, "N": self._N, "M": self._M, "S": self._S,
                 "O": self._O, "K": self._K, "SK": self._K._SK, "R": self._R,
                 "RC": self._RC, "L": self._L, "C": self._C,
                 "T": self._T}
        for machine in which.split():
            def tracer(old_state, input, new_state, output, machine=machine):
                if output is None:
                    if new_state:
                        print("%s.%s[%s].%s -> [%s]" %
                              (client_name, machine, old_state, input,
                               new_state), file=file)
                    else:
                        # the RendezvousConnector emits message events as if
                        # they were state transitions, except that old_state
                        # and new_state are empty strings. "input" is one of
                        # R.connected, R.rx(type phase+side), R.tx(type
                        # phase), R.lost .
                        print("%s.%s.%s" % (client_name, machine, input),
                              file=file)
                else:
                    if new_state:
                        print(" %s.%s.%s()" % (client_name, machine, output),
                              file=file)
                file.flush()
            names[machine].set_trace(tracer)

    ## def serialize(self):
    ##     raise NotImplemented
    # and these are the state-machine transition functions, which don't take
    # args
    @m.state(initial=True)
    def S0_empty(self): pass # pragma: no cover
    @m.state()
    def S1_lonely(self): pass # pragma: no cover
    @m.state()
    def S2_happy(self): pass # pragma: no cover
    @m.state()
    def S3_closing(self): pass # pragma: no cover
    @m.state(terminal=True)
    def S4_closed(self): pass # pragma: no cover

    # from the Wormhole

    # input/allocate/set_code are regular methods, not state-transition
    # inputs. We expect them to be called just after initialization, while
    # we're in the S0_empty state. You must call exactly one of them, and the
    # call must happen while we're in S0_empty, which makes them good
    # candidates for being a proper @m.input, but set_code() will immediately
    # (reentrantly) cause self.got_code() to be fired, which is messy. These
    # are all passthroughs to the Code machine, so one alternative would be
    # to have Wormhole call Code.{input,allocate,set_code} instead, but that
    # would require the Wormhole to be aware of Code (whereas right now
    # Wormhole only knows about this Boss instance, and everything else is
    # hidden away).
    def input_code(self):
        if self._did_start_code:
            raise OnlyOneCodeError()
        self._did_start_code = True
        return self._C.input_code()
    def allocate_code(self, code_length):
        if self._did_start_code:
            raise OnlyOneCodeError()
        self._did_start_code = True
        wl = PGPWordList()
        self._C.allocate_code(code_length, wl)
    def set_code(self, code):
        if ' ' in code:
            raise KeyFormatError("code (%s) contains spaces." % code)
        if self._did_start_code:
            raise OnlyOneCodeError()
        self._did_start_code = True
        self._C.set_code(code)

    @m.input()
    def send(self, plaintext): pass
    @m.input()
    def close(self): pass
    # from RendezvousConnector:
    # * "rx_welcome" is the Welcome message, which might signal an error, or
    #   our welcome_handler might signal one
    # * "rx_error" is an error message from the server (probably because of
    #   something we said badly, or due to CrowdedError)
    # * "error" is when an exception happened while it tried to deliver
    #   something else
    def rx_welcome(self, welcome):
        try:
            if "error" in welcome:
                raise WelcomeError(welcome["error"])
            # TODO: it'd be nice to not call the handler when we're in the
            # S3_closing or S4_closed states. I tried to implement this with
            # rx_welcome as an @input, but in the error case I'd be
            # delivering a new input (rx_error or something) while in the
            # middle of processing the rx_welcome input, and I wasn't sure
            # Automat would handle that correctly.
            self._welcome_handler(welcome) # can raise WelcomeError too
        except WelcomeError as welcome_error:
            self.rx_unwelcome(welcome_error)
    @m.input()
    def rx_unwelcome(self, welcome_error): pass
    @m.input()
    def rx_error(self, errmsg, orig): pass
    @m.input()
    def error(self, err): pass

    # from Code (provoked by input/allocate/set_code)
    @m.input()
    def got_code(self, code): pass
    # Key sends (got_key, scared)
    # Receive sends (got_message, happy, got_verifier, scared)
    @m.input()
    def happy(self): pass
    @m.input()
    def scared(self): pass

    def got_message(self, phase, plaintext):
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(plaintext, type(b"")), type(plaintext)
        if phase == "version":
            self._got_version(plaintext)
        elif re.search(r'^\d+$', phase):
            self._got_phase(int(phase), plaintext)
        else:
            # Ignore unrecognized phases, for forwards-compatibility. Use
            # log.err so tests will catch surprises.
            log.err(_UnknownPhaseError("received unknown phase '%s'" % phase))
    @m.input()
    def _got_version(self, plaintext): pass
    @m.input()
    def _got_phase(self, phase, plaintext): pass
    @m.input()
    def got_key(self, key): pass
    @m.input()
    def got_verifier(self, verifier): pass

    # Terminator sends closed
    @m.input()
    def closed(self): pass
    @m.output()
    def do_got_code(self, code):
        self._W.got_code(code)
    @m.output()
    def process_version(self, plaintext):
        # most of this is wormhole-to-wormhole, ignored for now
        # in the future, this is how Dilation is signalled
        self._their_versions = bytes_to_dict(plaintext)
        # but this part is app-to-app
        app_versions = self._their_versions.get("app_versions", {})
        self._W.got_version(app_versions)

    @m.output()
    def S_send(self, plaintext):
        assert isinstance(plaintext, type(b"")), type(plaintext)
        phase = self._next_tx_phase
        self._next_tx_phase += 1
        self._S.send("%d" % phase, plaintext)

    @m.output()
    def close_unwelcome(self, welcome_error):
        #assert isinstance(err, WelcomeError)
        self._result = welcome_error
        self._T.close("unwelcome")
    @m.output()
    def close_error(self, errmsg, orig):
        self._result = ServerError(errmsg)
        self._T.close("errory")
    @m.output()
    def close_scared(self):
        self._result = WrongPasswordError()
        self._T.close("scary")
    @m.output()
    def close_lonely(self):
        self._result = LonelyError()
        self._T.close("lonely")
    @m.output()
    def close_happy(self):
        self._result = "happy"
        self._T.close("happy")

    @m.output()
    def W_got_key(self, key):
        self._W.got_key(key)
    @m.output()
    def W_got_verifier(self, verifier):
        self._W.got_verifier(verifier)
    @m.output()
    def W_received(self, phase, plaintext):
        assert isinstance(phase, six.integer_types), type(phase)
        # we call Wormhole.received() in strict phase order, with no gaps
        self._rx_phases[phase] = plaintext
        while self._next_rx_phase in self._rx_phases:
            self._W.received(self._rx_phases.pop(self._next_rx_phase))
            self._next_rx_phase += 1
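W_received implements in-order delivery over an unordered channel: each message carries a numeric phase, out-of-order arrivals are parked in _rx_phases, and the while-loop drains every consecutive phase starting at _next_rx_phase. The same buffering logic, isolated for clarity (the function and variable names here are illustrative, not part of the wormhole API):

```python
def make_ordered_receiver(deliver):
    """Buffer (phase, payload) pairs; deliver payloads in phase order, no gaps."""
    pending = {}
    state = {"next": 0}
    def receive(phase, payload):
        pending[phase] = payload
        while state["next"] in pending:          # drain consecutive phases
            deliver(pending.pop(state["next"]))
            state["next"] += 1
    return receive

out = []
rx = make_ordered_receiver(out.append)
rx(1, "b")   # arrived early: parked, nothing delivered
rx(0, "a")   # fills the gap: delivers "a" then the parked "b"
rx(2, "c")
```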
    @m.output()
    def W_close_with_error(self, err):
        self._result = err # exception
        self._W.closed(self._result)

    @m.output()
    def W_closed(self):
        # result is either "happy" or a WormholeError of some sort
        self._W.closed(self._result)

    S0_empty.upon(close, enter=S3_closing, outputs=[close_lonely])
    S0_empty.upon(send, enter=S0_empty, outputs=[S_send])
    S0_empty.upon(rx_unwelcome, enter=S3_closing, outputs=[close_unwelcome])
    S0_empty.upon(got_code, enter=S1_lonely, outputs=[do_got_code])
    S0_empty.upon(rx_error, enter=S3_closing, outputs=[close_error])
    S0_empty.upon(error, enter=S4_closed, outputs=[W_close_with_error])

    S1_lonely.upon(rx_unwelcome, enter=S3_closing, outputs=[close_unwelcome])
    S1_lonely.upon(happy, enter=S2_happy, outputs=[])
    S1_lonely.upon(scared, enter=S3_closing, outputs=[close_scared])
    S1_lonely.upon(close, enter=S3_closing, outputs=[close_lonely])
    S1_lonely.upon(send, enter=S1_lonely, outputs=[S_send])
    S1_lonely.upon(got_key, enter=S1_lonely, outputs=[W_got_key])
    S1_lonely.upon(rx_error, enter=S3_closing, outputs=[close_error])
    S1_lonely.upon(error, enter=S4_closed, outputs=[W_close_with_error])

    S2_happy.upon(rx_unwelcome, enter=S3_closing, outputs=[close_unwelcome])
    S2_happy.upon(got_verifier, enter=S2_happy, outputs=[W_got_verifier])
    S2_happy.upon(_got_phase, enter=S2_happy, outputs=[W_received])
    S2_happy.upon(_got_version, enter=S2_happy, outputs=[process_version])
    S2_happy.upon(scared, enter=S3_closing, outputs=[close_scared])
    S2_happy.upon(close, enter=S3_closing, outputs=[close_happy])
    S2_happy.upon(send, enter=S2_happy, outputs=[S_send])
    S2_happy.upon(rx_error, enter=S3_closing, outputs=[close_error])
    S2_happy.upon(error, enter=S4_closed, outputs=[W_close_with_error])

    S3_closing.upon(rx_unwelcome, enter=S3_closing, outputs=[])
    S3_closing.upon(rx_error, enter=S3_closing, outputs=[])
    S3_closing.upon(got_verifier, enter=S3_closing, outputs=[])
    S3_closing.upon(_got_phase, enter=S3_closing, outputs=[])
    S3_closing.upon(_got_version, enter=S3_closing, outputs=[])
    S3_closing.upon(happy, enter=S3_closing, outputs=[])
    S3_closing.upon(scared, enter=S3_closing, outputs=[])
    S3_closing.upon(close, enter=S3_closing, outputs=[])
    S3_closing.upon(send, enter=S3_closing, outputs=[])
    S3_closing.upon(closed, enter=S4_closed, outputs=[W_closed])
    S3_closing.upon(error, enter=S4_closed, outputs=[W_close_with_error])

    S4_closed.upon(rx_unwelcome, enter=S4_closed, outputs=[])
    S4_closed.upon(got_verifier, enter=S4_closed, outputs=[])
    S4_closed.upon(_got_phase, enter=S4_closed, outputs=[])
    S4_closed.upon(_got_version, enter=S4_closed, outputs=[])
    S4_closed.upon(happy, enter=S4_closed, outputs=[])
    S4_closed.upon(scared, enter=S4_closed, outputs=[])
    S4_closed.upon(close, enter=S4_closed, outputs=[])
    S4_closed.upon(send, enter=S4_closed, outputs=[])
    S4_closed.upon(error, enter=S4_closed, outputs=[])
90  src/wormhole/_code.py  Normal file
@@ -0,0 +1,90 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
from attr import attrs, attrib
from attr.validators import provides
from automat import MethodicalMachine
from . import _interfaces

def first(outputs):
    return list(outputs)[0]

@attrs
@implementer(_interfaces.ICode)
class Code(object):
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def wire(self, boss, allocator, nameplate, key, input):
        self._B = _interfaces.IBoss(boss)
        self._A = _interfaces.IAllocator(allocator)
        self._N = _interfaces.INameplate(nameplate)
        self._K = _interfaces.IKey(key)
        self._I = _interfaces.IInput(input)

    @m.state(initial=True)
    def S0_idle(self): pass # pragma: no cover
    @m.state()
    def S1_inputting_nameplate(self): pass # pragma: no cover
    @m.state()
    def S2_inputting_words(self): pass # pragma: no cover
    @m.state()
    def S3_allocating(self): pass # pragma: no cover
    @m.state()
    def S4_known(self): pass # pragma: no cover

    # from App
    @m.input()
    def allocate_code(self, length, wordlist): pass
    @m.input()
    def input_code(self): pass
    @m.input()
    def set_code(self, code): pass

    # from Allocator
    @m.input()
    def allocated(self, nameplate, code): pass

    # from Input
    @m.input()
    def got_nameplate(self, nameplate): pass
    @m.input()
    def finished_input(self, code): pass

    @m.output()
    def do_set_code(self, code):
        nameplate = code.split("-", 2)[0]
        self._N.set_nameplate(nameplate)
        self._B.got_code(code)
        self._K.got_code(code)

    @m.output()
    def do_start_input(self):
        return self._I.start()
    @m.output()
    def do_middle_input(self, nameplate):
        self._N.set_nameplate(nameplate)
    @m.output()
    def do_finish_input(self, code):
        self._B.got_code(code)
        self._K.got_code(code)

    @m.output()
    def do_start_allocate(self, length, wordlist):
        self._A.allocate(length, wordlist)
    @m.output()
    def do_finish_allocate(self, nameplate, code):
        assert code.startswith(nameplate+"-"), (nameplate, code)
        self._N.set_nameplate(nameplate)
        self._B.got_code(code)
        self._K.got_code(code)

    S0_idle.upon(set_code, enter=S4_known, outputs=[do_set_code])
    S0_idle.upon(input_code, enter=S1_inputting_nameplate,
                 outputs=[do_start_input], collector=first)
    S1_inputting_nameplate.upon(got_nameplate, enter=S2_inputting_words,
                                outputs=[do_middle_input])
    S2_inputting_words.upon(finished_input, enter=S4_known,
                            outputs=[do_finish_input])
    S0_idle.upon(allocate_code, enter=S3_allocating, outputs=[do_start_allocate])
    S3_allocating.upon(allocated, enter=S4_known, outputs=[do_finish_allocate])
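do_set_code above assumes the code shape nameplate-word-word…: everything before the first hyphen is the server-visible nameplate, and the remainder is the secret words. A quick illustration of that split, in plain Python and independent of the Code machine (split_code is a hypothetical helper, not wormhole API):

```python
def split_code(code):
    """Split a wormhole code into (nameplate, words) at the first hyphen."""
    nameplate, words = code.split("-", 1)
    return nameplate, words

nameplate, words = split_code("4-purple-sausages")
```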
240  src/wormhole/_input.py  Normal file
@@ -0,0 +1,240 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
from attr import attrs, attrib
from attr.validators import provides
from twisted.internet import defer
from automat import MethodicalMachine
from . import _interfaces, errors

def first(outputs):
    return list(outputs)[0]

@attrs
@implementer(_interfaces.IInput)
class Input(object):
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def __attrs_post_init__(self):
        self._all_nameplates = set()
        self._nameplate = None
        self._wordlist = None
        self._wordlist_waiters = []

    def wire(self, code, lister):
        self._C = _interfaces.ICode(code)
        self._L = _interfaces.ILister(lister)

    def when_wordlist_is_available(self):
        if self._wordlist:
            return defer.succeed(None)
        d = defer.Deferred()
        self._wordlist_waiters.append(d)
        return d
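when_wordlist_is_available() is the usual "fire now or queue a waiter" pattern: if the wordlist is already known the caller gets an immediately-fired Deferred, otherwise a Deferred is queued to be fired later when the wordlist arrives. The same pattern with plain callbacks instead of Twisted Deferreds (WordlistGate and its names are illustrative):

```python
class WordlistGate:
    """Run callbacks immediately if the value is set, else queue them."""
    def __init__(self):
        self._value = None
        self._waiters = []

    def when_available(self, cb):
        if self._value is not None:
            cb(self._value)        # already known: fire right away
        else:
            self._waiters.append(cb)

    def set(self, value):
        self._value = value
        while self._waiters:       # drain every queued waiter
            self._waiters.pop()(value)

calls = []
g = WordlistGate()
g.when_available(calls.append)     # queued: wordlist not yet known
g.set("pgp-words")                 # fires the queued waiter
g.when_available(calls.append)     # fires immediately
```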
    @m.state(initial=True)
    def S0_idle(self): pass # pragma: no cover
    @m.state()
    def S1_typing_nameplate(self): pass # pragma: no cover
    @m.state()
    def S2_typing_code_no_wordlist(self): pass # pragma: no cover
    @m.state()
    def S3_typing_code_yes_wordlist(self): pass # pragma: no cover
    @m.state(terminal=True)
    def S4_done(self): pass # pragma: no cover

    # from Code
    @m.input()
    def start(self): pass

    # from Lister
    @m.input()
    def got_nameplates(self, all_nameplates): pass

    # from Nameplate
    @m.input()
    def got_wordlist(self, wordlist): pass

    # API provided to app as ICodeInputHelper
    @m.input()
    def refresh_nameplates(self): pass
    @m.input()
    def get_nameplate_completions(self, prefix): pass
    @m.input()
    def choose_nameplate(self, nameplate): pass
    @m.input()
    def get_word_completions(self, prefix): pass
    @m.input()
    def choose_words(self, words): pass

    @m.output()
    def do_start(self):
        self._L.refresh()
        return Helper(self)
    @m.output()
    def do_refresh(self):
        self._L.refresh()
    @m.output()
    def record_nameplates(self, all_nameplates):
        # we get a set of nameplate id strings
        self._all_nameplates = all_nameplates
    @m.output()
    def _get_nameplate_completions(self, prefix):
        completions = set()
        for nameplate in self._all_nameplates:
            if nameplate.startswith(prefix):
                # TODO: it's a little weird that Input is responsible for the
                # hyphen on nameplates, but WordList owns it for words
                completions.add(nameplate+"-")
        return completions
    @m.output()
    def record_all_nameplates(self, nameplate):
        self._nameplate = nameplate
        self._C.got_nameplate(nameplate)
    @m.output()
    def record_wordlist(self, wordlist):
        from ._rlcompleter import debug
        debug("  -record_wordlist")
        self._wordlist = wordlist
    @m.output()
    def notify_wordlist_waiters(self, wordlist):
        while self._wordlist_waiters:
            d = self._wordlist_waiters.pop()
|
||||
d.callback(None)
|
||||
|
||||
@m.output()
|
||||
def no_word_completions(self, prefix):
|
||||
return set()
|
||||
@m.output()
|
||||
def _get_word_completions(self, prefix):
|
||||
assert self._wordlist
|
||||
return self._wordlist.get_completions(prefix)
|
||||
|
||||
@m.output()
|
||||
def raise_must_choose_nameplate1(self, prefix):
|
||||
raise errors.MustChooseNameplateFirstError()
|
||||
@m.output()
|
||||
def raise_must_choose_nameplate2(self, words):
|
||||
raise errors.MustChooseNameplateFirstError()
|
||||
@m.output()
|
||||
def raise_already_chose_nameplate1(self):
|
||||
raise errors.AlreadyChoseNameplateError()
|
||||
@m.output()
|
||||
def raise_already_chose_nameplate2(self, prefix):
|
||||
raise errors.AlreadyChoseNameplateError()
|
||||
@m.output()
|
||||
def raise_already_chose_nameplate3(self, nameplate):
|
||||
raise errors.AlreadyChoseNameplateError()
|
||||
@m.output()
|
||||
def raise_already_chose_words1(self, prefix):
|
||||
raise errors.AlreadyChoseWordsError()
|
||||
@m.output()
|
||||
def raise_already_chose_words2(self, words):
|
||||
raise errors.AlreadyChoseWordsError()
|
||||
|
||||
@m.output()
|
||||
def do_words(self, words):
|
||||
code = self._nameplate + "-" + words
|
||||
self._C.finished_input(code)
|
||||
|
||||
S0_idle.upon(start, enter=S1_typing_nameplate,
|
||||
outputs=[do_start], collector=first)
|
||||
# wormholes that don't use input_code (i.e. they use allocate_code or
|
||||
# generate_code) will never start() us, but Nameplate will give us a
|
||||
# wordlist anyways (as soon as the nameplate is claimed), so handle it.
|
||||
S0_idle.upon(got_wordlist, enter=S0_idle, outputs=[record_wordlist,
|
||||
notify_wordlist_waiters])
|
||||
S1_typing_nameplate.upon(got_nameplates, enter=S1_typing_nameplate,
|
||||
outputs=[record_nameplates])
|
||||
# but wormholes that *do* use input_code should not get got_wordlist
|
||||
# until after we tell Code that we got_nameplate, which is the earliest
|
||||
# it can be claimed
|
||||
S1_typing_nameplate.upon(refresh_nameplates, enter=S1_typing_nameplate,
|
||||
outputs=[do_refresh])
|
||||
S1_typing_nameplate.upon(get_nameplate_completions,
|
||||
enter=S1_typing_nameplate,
|
||||
outputs=[_get_nameplate_completions],
|
||||
collector=first)
|
||||
S1_typing_nameplate.upon(choose_nameplate, enter=S2_typing_code_no_wordlist,
|
||||
outputs=[record_all_nameplates])
|
||||
S1_typing_nameplate.upon(get_word_completions,
|
||||
enter=S1_typing_nameplate,
|
||||
outputs=[raise_must_choose_nameplate1])
|
||||
S1_typing_nameplate.upon(choose_words, enter=S1_typing_nameplate,
|
||||
outputs=[raise_must_choose_nameplate2])
|
||||
|
||||
S2_typing_code_no_wordlist.upon(got_nameplates,
|
||||
enter=S2_typing_code_no_wordlist, outputs=[])
|
||||
S2_typing_code_no_wordlist.upon(got_wordlist,
|
||||
enter=S3_typing_code_yes_wordlist,
|
||||
outputs=[record_wordlist,
|
||||
notify_wordlist_waiters])
|
||||
S2_typing_code_no_wordlist.upon(refresh_nameplates,
|
||||
enter=S2_typing_code_no_wordlist,
|
||||
outputs=[raise_already_chose_nameplate1])
|
||||
S2_typing_code_no_wordlist.upon(get_nameplate_completions,
|
||||
enter=S2_typing_code_no_wordlist,
|
||||
outputs=[raise_already_chose_nameplate2])
|
||||
S2_typing_code_no_wordlist.upon(choose_nameplate,
|
||||
enter=S2_typing_code_no_wordlist,
|
||||
outputs=[raise_already_chose_nameplate3])
|
||||
S2_typing_code_no_wordlist.upon(get_word_completions,
|
||||
enter=S2_typing_code_no_wordlist,
|
||||
outputs=[no_word_completions],
|
||||
collector=first)
|
||||
S2_typing_code_no_wordlist.upon(choose_words, enter=S4_done,
|
||||
outputs=[do_words])
|
||||
|
||||
S3_typing_code_yes_wordlist.upon(got_nameplates,
|
||||
enter=S3_typing_code_yes_wordlist,
|
||||
outputs=[])
|
||||
# got_wordlist: should never happen
|
||||
S3_typing_code_yes_wordlist.upon(refresh_nameplates,
|
||||
enter=S3_typing_code_yes_wordlist,
|
||||
outputs=[raise_already_chose_nameplate1])
|
||||
S3_typing_code_yes_wordlist.upon(get_nameplate_completions,
|
||||
enter=S3_typing_code_yes_wordlist,
|
||||
outputs=[raise_already_chose_nameplate2])
|
||||
S3_typing_code_yes_wordlist.upon(choose_nameplate,
|
||||
enter=S3_typing_code_yes_wordlist,
|
||||
outputs=[raise_already_chose_nameplate3])
|
||||
S3_typing_code_yes_wordlist.upon(get_word_completions,
|
||||
enter=S3_typing_code_yes_wordlist,
|
||||
outputs=[_get_word_completions],
|
||||
collector=first)
|
||||
S3_typing_code_yes_wordlist.upon(choose_words, enter=S4_done,
|
||||
outputs=[do_words])
|
||||
|
||||
S4_done.upon(got_nameplates, enter=S4_done, outputs=[])
|
||||
S4_done.upon(got_wordlist, enter=S4_done, outputs=[])
|
||||
S4_done.upon(refresh_nameplates,
|
||||
enter=S4_done,
|
||||
outputs=[raise_already_chose_nameplate1])
|
||||
S4_done.upon(get_nameplate_completions,
|
||||
enter=S4_done,
|
||||
outputs=[raise_already_chose_nameplate2])
|
||||
S4_done.upon(choose_nameplate, enter=S4_done,
|
||||
outputs=[raise_already_chose_nameplate3])
|
||||
S4_done.upon(get_word_completions, enter=S4_done,
|
||||
outputs=[raise_already_chose_words1])
|
||||
S4_done.upon(choose_words, enter=S4_done,
|
||||
outputs=[raise_already_chose_words2])
|
||||
|
||||
# we only expose the Helper to application code, not _Input
|
||||
@attrs
|
||||
class Helper(object):
|
||||
_input = attrib()
|
||||
|
||||
def refresh_nameplates(self):
|
||||
self._input.refresh_nameplates()
|
||||
def get_nameplate_completions(self, prefix):
|
||||
return self._input.get_nameplate_completions(prefix)
|
||||
def choose_nameplate(self, nameplate):
|
||||
self._input.choose_nameplate(nameplate)
|
||||
def when_wordlist_is_available(self):
|
||||
return self._input.when_wordlist_is_available()
|
||||
def get_word_completions(self, prefix):
|
||||
return self._input.get_word_completions(prefix)
|
||||
def choose_words(self, words):
|
||||
self._input.choose_words(words)
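The completion logic in `_get_nameplate_completions` is plain prefix matching, with the separating hyphen appended to each match. As a standalone sketch (hypothetical function name, no automat/attrs dependency):

```python
def nameplate_completions(all_nameplates, prefix):
    """Return each known nameplate that starts with `prefix`, plus the
    hyphen that separates the nameplate from the code words."""
    return {n + "-" for n in all_nameplates if n.startswith(prefix)}
```

This is why a readline frontend sees "4-" and "42-" as distinct completions for the prefix "4".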
45  src/wormhole/_interfaces.py  Normal file
@@ -0,0 +1,45 @@
from zope.interface import Interface

class IWormhole(Interface):
    pass
class IBoss(Interface):
    pass
class INameplate(Interface):
    pass
class IMailbox(Interface):
    pass
class ISend(Interface):
    pass
class IOrder(Interface):
    pass
class IKey(Interface):
    pass
class IReceive(Interface):
    pass
class IRendezvousConnector(Interface):
    pass
class ILister(Interface):
    pass
class ICode(Interface):
    pass
class IInput(Interface):
    pass
class IAllocator(Interface):
    pass
class ITerminator(Interface):
    pass

class ITiming(Interface):
    pass
class ITorManager(Interface):
    pass
class IWordlist(Interface):
    def choose_words(length):
        """Randomly select LENGTH words, join them with hyphens, return the
        result."""
    def get_completions(prefix):
        """Return a list of all suffixes that could complete the given
        prefix."""

class IJournal(Interface): # TODO: this needs to be public
    pass
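IWordlist is the one interface here with real methods. A toy implementation sketch (hypothetical class and word set, not the real PGP wordlist used by `_wordlist.py`) shows the expected shapes of the two methods:

```python
import random

class ToyWordlist:
    """Minimal IWordlist-shaped object: hyphen-joined random words,
    plus prefix completion."""
    def __init__(self, words):
        self._words = sorted(words)

    def choose_words(self, length):
        # randomly select LENGTH words, join them with hyphens
        return "-".join(random.choice(self._words) for _ in range(length))

    def get_completions(self, prefix):
        # all suffixes that could complete the given prefix
        return {w[len(prefix):] for w in self._words if w.startswith(prefix)}
```

Returning *suffixes* (not whole words) from `get_completions` is what lets a readline-style frontend append the completion directly after what the user has already typed.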
178  src/wormhole/_key.py  Normal file
@@ -0,0 +1,178 @@
from __future__ import print_function, absolute_import, unicode_literals
from hashlib import sha256
import six
from zope.interface import implementer
from attr import attrs, attrib
from attr.validators import provides, instance_of
from spake2 import SPAKE2_Symmetric
from hkdf import Hkdf
from nacl.secret import SecretBox
from nacl.exceptions import CryptoError
from nacl import utils
from automat import MethodicalMachine
from .util import (to_bytes, bytes_to_hexstr, hexstr_to_bytes,
                   bytes_to_dict, dict_to_bytes)
from . import _interfaces
CryptoError  # re-exported via __all__; bare reference appeases pyflakes
__all__ = ["derive_key", "derive_phase_key", "CryptoError",
           "Key"]

def HKDF(skm, outlen, salt=None, CTXinfo=b""):
    return Hkdf(salt, skm).expand(CTXinfo, outlen)

def derive_key(key, purpose, length=SecretBox.KEY_SIZE):
    if not isinstance(key, type(b"")): raise TypeError(type(key))
    if not isinstance(purpose, type(b"")): raise TypeError(type(purpose))
    if not isinstance(length, six.integer_types): raise TypeError(type(length))
    return HKDF(key, length, CTXinfo=purpose)
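`derive_key` delegates to HKDF (RFC 5869) with no salt and the purpose string as the `info` input. A stdlib-only sketch of the same extract-then-expand construction (illustrative; the module itself uses the `hkdf` package):

```python
import hashlib
import hmac

def hkdf_extract(salt, ikm):
    """RFC 5869 extract step: PRK = HMAC-SHA256(salt, IKM)."""
    if salt is None:
        salt = b"\x00" * hashlib.sha256().digest_size
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk, info, length):
    """RFC 5869 expand step: stretch PRK to `length` bytes, bound to `info`."""
    okm = b""
    t = b""
    counter = 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

def derive_key_sketch(key, purpose, length=32):
    # like derive_key above: no salt, the purpose bytes as HKDF's info/CTXinfo
    return hkdf_expand(hkdf_extract(None, key), purpose, length)
```

Binding each derived key to a distinct `purpose` means two subkeys derived from the same master key are computationally independent.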
def derive_phase_key(key, side, phase):
    assert isinstance(side, type("")), type(side)
    assert isinstance(phase, type("")), type(phase)
    side_bytes = side.encode("ascii")
    phase_bytes = phase.encode("ascii")
    purpose = (b"wormhole:phase:"
               + sha256(side_bytes).digest()
               + sha256(phase_bytes).digest())
    return derive_key(key, purpose)

def decrypt_data(key, encrypted):
    assert isinstance(key, type(b"")), type(key)
    assert isinstance(encrypted, type(b"")), type(encrypted)
    assert len(key) == SecretBox.KEY_SIZE, len(key)
    box = SecretBox(key)
    data = box.decrypt(encrypted)
    return data

def encrypt_data(key, plaintext):
    assert isinstance(key, type(b"")), type(key)
    assert isinstance(plaintext, type(b"")), type(plaintext)
    assert len(key) == SecretBox.KEY_SIZE, len(key)
    box = SecretBox(key)
    nonce = utils.random(SecretBox.NONCE_SIZE)
    return box.encrypt(plaintext, nonce)

# the Key we expose to callers (Boss, Ordering) is responsible for sorting
# the two messages (got_code and got_pake), then delivering them to
# _SortedKey in the right order.

@attrs
@implementer(_interfaces.IKey)
class Key(object):
    _appid = attrib(validator=instance_of(type(u"")))
    _versions = attrib(validator=instance_of(dict))
    _side = attrib(validator=instance_of(type(u"")))
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def __attrs_post_init__(self):
        self._SK = _SortedKey(self._appid, self._versions, self._side,
                              self._timing)
        self._debug_pake_stashed = False # for tests

    def wire(self, boss, mailbox, receive):
        self._SK.wire(boss, mailbox, receive)

    @m.state(initial=True)
    def S00(self): pass # pragma: no cover
    @m.state()
    def S01(self): pass # pragma: no cover
    @m.state()
    def S10(self): pass # pragma: no cover
    @m.state()
    def S11(self): pass # pragma: no cover

    @m.input()
    def got_code(self, code): pass
    @m.input()
    def got_pake(self, body): pass

    @m.output()
    def stash_pake(self, body):
        self._pake = body
        self._debug_pake_stashed = True
    @m.output()
    def deliver_code(self, code):
        self._SK.got_code(code)
    @m.output()
    def deliver_pake(self, body):
        self._SK.got_pake(body)
    @m.output()
    def deliver_code_and_stashed_pake(self, code):
        self._SK.got_code(code)
        self._SK.got_pake(self._pake)

    S00.upon(got_code, enter=S10, outputs=[deliver_code])
    S10.upon(got_pake, enter=S11, outputs=[deliver_pake])
    S00.upon(got_pake, enter=S01, outputs=[stash_pake])
    S01.upon(got_code, enter=S11, outputs=[deliver_code_and_stashed_pake])
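Key's four states (S00/S01/S10/S11) encode one rule: got_code and got_pake may arrive in either order, and a PAKE message that arrives before the code is stashed until the code is known. The same buffering expressed without automat (hypothetical names, a sketch of the idea rather than the real class):

```python
class PakeOrderer:
    """Deliver (code, pake) to `deliver` exactly once, regardless of
    which of the two events arrives first."""
    def __init__(self, deliver):
        self._deliver = deliver
        self._code = None
        self._pake = None

    def got_code(self, code):
        self._code = code
        if self._pake is not None:      # pake arrived first: flush the stash
            self._deliver(code, self._pake)

    def got_pake(self, pake):
        self._pake = pake
        if self._code is not None:      # code arrived first: deliver now
            self._deliver(self._code, pake)
```

The explicit state-machine version buys the same behavior plus serializability and `automat-visualize` diagrams.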

@attrs
class _SortedKey(object):
    _appid = attrib(validator=instance_of(type(u"")))
    _versions = attrib(validator=instance_of(dict))
    _side = attrib(validator=instance_of(type(u"")))
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def wire(self, boss, mailbox, receive):
        self._B = _interfaces.IBoss(boss)
        self._M = _interfaces.IMailbox(mailbox)
        self._R = _interfaces.IReceive(receive)

    @m.state(initial=True)
    def S0_know_nothing(self): pass # pragma: no cover
    @m.state()
    def S1_know_code(self): pass # pragma: no cover
    @m.state()
    def S2_know_key(self): pass # pragma: no cover
    @m.state(terminal=True)
    def S3_scared(self): pass # pragma: no cover

    # from Boss
    @m.input()
    def got_code(self, code): pass

    # from Ordering
    def got_pake(self, body):
        assert isinstance(body, type(b"")), type(body)
        payload = bytes_to_dict(body)
        if "pake_v1" in payload:
            self.got_pake_good(hexstr_to_bytes(payload["pake_v1"]))
        else:
            self.got_pake_bad()
    @m.input()
    def got_pake_good(self, msg2): pass
    @m.input()
    def got_pake_bad(self): pass

    @m.output()
    def build_pake(self, code):
        with self._timing.add("pake1", waiting="crypto"):
            self._sp = SPAKE2_Symmetric(to_bytes(code),
                                        idSymmetric=to_bytes(self._appid))
            msg1 = self._sp.start()
        body = dict_to_bytes({"pake_v1": bytes_to_hexstr(msg1)})
        self._M.add_message("pake", body)

    @m.output()
    def scared(self):
        self._B.scared()
    @m.output()
    def compute_key(self, msg2):
        assert isinstance(msg2, type(b""))
        with self._timing.add("pake2", waiting="crypto"):
            key = self._sp.finish(msg2)
        self._B.got_key(key)
        phase = "version"
        data_key = derive_phase_key(key, self._side, phase)
        plaintext = dict_to_bytes(self._versions)
        encrypted = encrypt_data(data_key, plaintext)
        self._M.add_message(phase, encrypted)
        self._R.got_key(key)

    S0_know_nothing.upon(got_code, enter=S1_know_code, outputs=[build_pake])
    S1_know_code.upon(got_pake_good, enter=S2_know_key, outputs=[compute_key])
    S1_know_code.upon(got_pake_bad, enter=S3_scared, outputs=[scared])
73  src/wormhole/_lister.py  Normal file
@@ -0,0 +1,73 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
from attr import attrs, attrib
from attr.validators import provides
from automat import MethodicalMachine
from . import _interfaces

@attrs
@implementer(_interfaces.ILister)
class Lister(object):
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def wire(self, rendezvous_connector, input):
        self._RC = _interfaces.IRendezvousConnector(rendezvous_connector)
        self._I = _interfaces.IInput(input)

    # Ideally, each API request would spawn a new "list_nameplates" message
    # to the server, so the response would be maximally fresh, but that would
    # require correlating server request+response messages, and the protocol
    # is intended to be less stateful than that. So we offer a weaker
    # freshness property: if no server requests are in flight, then a new API
    # request will provoke a new server request, and the result will be
    # fresh. But if a server request is already in flight when a second API
    # request arrives, both requests will be satisfied by the same response.

    @m.state(initial=True)
    def S0A_idle_disconnected(self): pass # pragma: no cover
    @m.state()
    def S1A_wanting_disconnected(self): pass # pragma: no cover
    @m.state()
    def S0B_idle_connected(self): pass # pragma: no cover
    @m.state()
    def S1B_wanting_connected(self): pass # pragma: no cover

    @m.input()
    def connected(self): pass
    @m.input()
    def lost(self): pass
    @m.input()
    def refresh(self): pass
    @m.input()
    def rx_nameplates(self, all_nameplates): pass

    @m.output()
    def RC_tx_list(self):
        self._RC.tx_list()
    @m.output()
    def I_got_nameplates(self, all_nameplates):
        # We get a set of nameplate ids. There may be more attributes in the
        # future: change RendezvousConnector._response_handle_nameplates to
        # get them
        self._I.got_nameplates(all_nameplates)

    S0A_idle_disconnected.upon(connected, enter=S0B_idle_connected, outputs=[])
    S0B_idle_connected.upon(lost, enter=S0A_idle_disconnected, outputs=[])

    S0A_idle_disconnected.upon(refresh,
                               enter=S1A_wanting_disconnected, outputs=[])
    S1A_wanting_disconnected.upon(refresh,
                                  enter=S1A_wanting_disconnected, outputs=[])
    S1A_wanting_disconnected.upon(connected, enter=S1B_wanting_connected,
                                  outputs=[RC_tx_list])
    S0B_idle_connected.upon(refresh, enter=S1B_wanting_connected,
                            outputs=[RC_tx_list])
    S0B_idle_connected.upon(rx_nameplates, enter=S0B_idle_connected,
                            outputs=[I_got_nameplates])
    S1B_wanting_connected.upon(lost, enter=S1A_wanting_disconnected, outputs=[])
    S1B_wanting_connected.upon(refresh, enter=S1B_wanting_connected,
                               outputs=[RC_tx_list])
    S1B_wanting_connected.upon(rx_nameplates, enter=S0B_idle_connected,
                               outputs=[I_got_nameplates])
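The idle/wanting x disconnected/connected grid above boils down to two booleans: a refresh requested while disconnected is remembered as "wanting" and transmitted as soon as the connection (re)appears. A plain-Python sketch of that core behavior (hypothetical names, deliberately ignoring the repeated-refresh-while-connected case):

```python
class ListerSketch:
    """Remember a pending 'want' across connection loss: a refresh
    requested while disconnected is sent on the next (re)connect."""
    def __init__(self, tx_list):
        self._tx_list = tx_list    # sends "list_nameplates" to the server
        self._connected = False
        self._wanting = False

    def connected(self):
        self._connected = True
        if self._wanting:          # flush the deferred refresh
            self._tx_list()

    def lost(self):
        self._connected = False

    def refresh(self):
        self._wanting = True
        if self._connected:
            self._tx_list()

    def rx_nameplates(self, nameplates):
        self._wanting = False      # response satisfies all pending wants
        return nameplates
```

This is the "survive losing the rendezvous connection" pattern in miniature; the same A/B connected-state split recurs in Mailbox and Nameplate below.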
195  src/wormhole/_mailbox.py  Normal file
@@ -0,0 +1,195 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
from attr import attrs, attrib
from attr.validators import instance_of
from automat import MethodicalMachine
from . import _interfaces

@attrs
@implementer(_interfaces.IMailbox)
class Mailbox(object):
    _side = attrib(validator=instance_of(type(u"")))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def __attrs_post_init__(self):
        self._mailbox = None
        self._pending_outbound = {}
        self._processed = set()

    def wire(self, nameplate, rendezvous_connector, ordering, terminator):
        self._N = _interfaces.INameplate(nameplate)
        self._RC = _interfaces.IRendezvousConnector(rendezvous_connector)
        self._O = _interfaces.IOrder(ordering)
        self._T = _interfaces.ITerminator(terminator)

    # all -A states: not connected
    # all -B states: yes connected
    # B states serialize as A, so they deserialize as unconnected

    # S0: know nothing
    @m.state(initial=True)
    def S0A(self): pass # pragma: no cover
    @m.state()
    def S0B(self): pass # pragma: no cover

    # S1: mailbox known, not opened
    @m.state()
    def S1A(self): pass # pragma: no cover

    # S2: mailbox known, opened
    # We've definitely tried to open the mailbox at least once, but it must
    # be re-opened with each connection, because open() is also subscribe()
    @m.state()
    def S2A(self): pass # pragma: no cover
    @m.state()
    def S2B(self): pass # pragma: no cover

    # S3: closing
    @m.state()
    def S3A(self): pass # pragma: no cover
    @m.state()
    def S3B(self): pass # pragma: no cover

    # S4: closed. We no longer care whether we're connected or not
    #@m.state()
    #def S4A(self): pass
    #@m.state()
    #def S4B(self): pass
    @m.state(terminal=True)
    def S4(self): pass # pragma: no cover
    S4A = S4
    S4B = S4

    # from Terminator
    @m.input()
    def close(self, mood): pass

    # from Nameplate
    @m.input()
    def got_mailbox(self, mailbox): pass

    # from RendezvousConnector
    @m.input()
    def connected(self): pass
    @m.input()
    def lost(self): pass

    def rx_message(self, side, phase, body):
        assert isinstance(side, type("")), type(side)
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(body, type(b"")), type(body)
        if side == self._side:
            self.rx_message_ours(phase, body)
        else:
            self.rx_message_theirs(side, phase, body)
    @m.input()
    def rx_message_ours(self, phase, body): pass
    @m.input()
    def rx_message_theirs(self, side, phase, body): pass
    @m.input()
    def rx_closed(self): pass

    # from Send or Key
    @m.input()
    def add_message(self, phase, body):
        pass

    @m.output()
    def record_mailbox(self, mailbox):
        self._mailbox = mailbox
    @m.output()
    def RC_tx_open(self):
        assert self._mailbox
        self._RC.tx_open(self._mailbox)
    @m.output()
    def queue(self, phase, body):
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(body, type(b"")), (type(body), phase, body)
        self._pending_outbound[phase] = body
    @m.output()
    def record_mailbox_and_RC_tx_open_and_drain(self, mailbox):
        self._mailbox = mailbox
        self._RC.tx_open(mailbox)
        self._drain()
    @m.output()
    def drain(self):
        self._drain()
    def _drain(self):
        for phase, body in self._pending_outbound.items():
            self._RC.tx_add(phase, body)
    @m.output()
    def RC_tx_add(self, phase, body):
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(body, type(b"")), type(body)
        self._RC.tx_add(phase, body)
    @m.output()
    def N_release_and_accept(self, side, phase, body):
        self._N.release()
        if phase not in self._processed:
            self._processed.add(phase)
            self._O.got_message(side, phase, body)
    @m.output()
    def RC_tx_close(self):
        assert self._mood
        self._RC_tx_close()
    def _RC_tx_close(self):
        self._RC.tx_close(self._mailbox, self._mood)

    @m.output()
    def dequeue(self, phase, body):
        self._pending_outbound.pop(phase, None)
    @m.output()
    def record_mood(self, mood):
        self._mood = mood
    @m.output()
    def record_mood_and_RC_tx_close(self, mood):
        self._mood = mood
        self._RC_tx_close()
    @m.output()
    def ignore_mood_and_T_mailbox_done(self, mood):
        self._T.mailbox_done()
    @m.output()
    def T_mailbox_done(self):
        self._T.mailbox_done()

    S0A.upon(connected, enter=S0B, outputs=[])
    S0A.upon(got_mailbox, enter=S1A, outputs=[record_mailbox])
    S0A.upon(add_message, enter=S0A, outputs=[queue])
    S0A.upon(close, enter=S4A, outputs=[ignore_mood_and_T_mailbox_done])
    S0B.upon(lost, enter=S0A, outputs=[])
    S0B.upon(add_message, enter=S0B, outputs=[queue])
    S0B.upon(close, enter=S4B, outputs=[ignore_mood_and_T_mailbox_done])
    S0B.upon(got_mailbox, enter=S2B,
             outputs=[record_mailbox_and_RC_tx_open_and_drain])

    S1A.upon(connected, enter=S2B, outputs=[RC_tx_open, drain])
    S1A.upon(add_message, enter=S1A, outputs=[queue])
    S1A.upon(close, enter=S4A, outputs=[ignore_mood_and_T_mailbox_done])

    S2A.upon(connected, enter=S2B, outputs=[RC_tx_open, drain])
    S2A.upon(add_message, enter=S2A, outputs=[queue])
    S2A.upon(close, enter=S3A, outputs=[record_mood])
    S2B.upon(lost, enter=S2A, outputs=[])
    S2B.upon(add_message, enter=S2B, outputs=[queue, RC_tx_add])
    S2B.upon(rx_message_theirs, enter=S2B, outputs=[N_release_and_accept])
    S2B.upon(rx_message_ours, enter=S2B, outputs=[dequeue])
    S2B.upon(close, enter=S3B, outputs=[record_mood_and_RC_tx_close])

    S3A.upon(connected, enter=S3B, outputs=[RC_tx_close])
    S3B.upon(lost, enter=S3A, outputs=[])
    S3B.upon(rx_closed, enter=S4B, outputs=[T_mailbox_done])
    S3B.upon(add_message, enter=S3B, outputs=[])
    S3B.upon(rx_message_theirs, enter=S3B, outputs=[])
    S3B.upon(rx_message_ours, enter=S3B, outputs=[])
    S3B.upon(close, enter=S3B, outputs=[])

    S4A.upon(connected, enter=S4B, outputs=[])
    S4B.upon(lost, enter=S4A, outputs=[])
    S4.upon(add_message, enter=S4, outputs=[])
    S4.upon(rx_message_theirs, enter=S4, outputs=[])
    S4.upon(rx_message_ours, enter=S4, outputs=[])
    S4.upon(close, enter=S4, outputs=[])
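Mailbox's queue/drain/dequeue outputs give at-least-once delivery: every outbound message stays in `_pending_outbound` (keyed by phase) and is re-sent on each reconnect, until the server echoes our own message back. A simplified sketch of just that bookkeeping (hypothetical names; the real machine only transmits immediately while connected with an open mailbox):

```python
class PendingOutbound:
    """Keep outbound messages until the server echoes them back to us."""
    def __init__(self, tx_add):
        self._tx_add = tx_add
        self._pending = {}

    def add(self, phase, body):
        self._pending[phase] = body
        self._tx_add(phase, body)   # optimistic immediate send

    def drain(self):
        # on reconnect: re-send everything not yet acknowledged
        for phase, body in sorted(self._pending.items()):
            self._tx_add(phase, body)

    def rx_ours(self, phase):
        # seeing our own message echoed means the server has stored it
        self._pending.pop(phase, None)
```

Duplicate *inbound* deliveries caused by the same re-sends are the job of the `_processed` set in `N_release_and_accept`, which drops phases it has already handed to Order.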
153  src/wormhole/_nameplate.py  Normal file
@@ -0,0 +1,153 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
from automat import MethodicalMachine
from . import _interfaces
from ._wordlist import PGPWordList

@implementer(_interfaces.INameplate)
class Nameplate(object):
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def __init__(self):
        self._nameplate = None

    def wire(self, mailbox, input, rendezvous_connector, terminator):
        self._M = _interfaces.IMailbox(mailbox)
        self._I = _interfaces.IInput(input)
        self._RC = _interfaces.IRendezvousConnector(rendezvous_connector)
        self._T = _interfaces.ITerminator(terminator)

    # all -A states: not connected
    # all -B states: yes connected
    # B states serialize as A, so they deserialize as unconnected

    # S0: know nothing
    @m.state(initial=True)
    def S0A(self): pass # pragma: no cover
    @m.state()
    def S0B(self): pass # pragma: no cover

    # S1: nameplate known, never claimed
    @m.state()
    def S1A(self): pass # pragma: no cover

    # S2: nameplate known, maybe claimed
    @m.state()
    def S2A(self): pass # pragma: no cover
    @m.state()
    def S2B(self): pass # pragma: no cover

    # S3: nameplate claimed
    @m.state()
    def S3A(self): pass # pragma: no cover
    @m.state()
    def S3B(self): pass # pragma: no cover

    # S4: maybe released
    @m.state()
    def S4A(self): pass # pragma: no cover
    @m.state()
    def S4B(self): pass # pragma: no cover

    # S5: released
    # we no longer care whether we're connected or not
    #@m.state()
    #def S5A(self): pass
    #@m.state()
    #def S5B(self): pass
    @m.state()
    def S5(self): pass # pragma: no cover
    S5A = S5
    S5B = S5

    # from Boss
    @m.input()
    def set_nameplate(self, nameplate): pass

    # from Mailbox
    @m.input()
    def release(self): pass

    # from Terminator
    @m.input()
    def close(self): pass

    # from RendezvousConnector
    @m.input()
    def connected(self): pass
    @m.input()
    def lost(self): pass

    @m.input()
    def rx_claimed(self, mailbox): pass
    @m.input()
    def rx_released(self): pass

    @m.output()
    def record_nameplate(self, nameplate):
        self._nameplate = nameplate
    @m.output()
    def record_nameplate_and_RC_tx_claim(self, nameplate):
        self._nameplate = nameplate
        self._RC.tx_claim(self._nameplate)
    @m.output()
    def RC_tx_claim(self):
        # when invoked via M.connected(), we must use the stored nameplate
        self._RC.tx_claim(self._nameplate)
    @m.output()
    def I_got_wordlist(self, mailbox):
        # TODO: select wordlist based on nameplate properties, in rx_claimed
        wordlist = PGPWordList()
        self._I.got_wordlist(wordlist)
    @m.output()
    def M_got_mailbox(self, mailbox):
        self._M.got_mailbox(mailbox)
    @m.output()
    def RC_tx_release(self):
        assert self._nameplate
        self._RC.tx_release(self._nameplate)
    @m.output()
    def T_nameplate_done(self):
        self._T.nameplate_done()

    S0A.upon(set_nameplate, enter=S1A, outputs=[record_nameplate])
    S0A.upon(connected, enter=S0B, outputs=[])
    S0A.upon(close, enter=S5A, outputs=[T_nameplate_done])
    S0B.upon(set_nameplate, enter=S2B,
             outputs=[record_nameplate_and_RC_tx_claim])
    S0B.upon(lost, enter=S0A, outputs=[])
    S0B.upon(close, enter=S5A, outputs=[T_nameplate_done])

    S1A.upon(connected, enter=S2B, outputs=[RC_tx_claim])
    S1A.upon(close, enter=S5A, outputs=[T_nameplate_done])

    S2A.upon(connected, enter=S2B, outputs=[RC_tx_claim])
    S2A.upon(close, enter=S4A, outputs=[])
    S2B.upon(lost, enter=S2A, outputs=[])
    S2B.upon(rx_claimed, enter=S3B, outputs=[I_got_wordlist, M_got_mailbox])
    S2B.upon(close, enter=S4B, outputs=[RC_tx_release])

    S3A.upon(connected, enter=S3B, outputs=[])
    S3A.upon(close, enter=S4A, outputs=[])
    S3B.upon(lost, enter=S3A, outputs=[])
    #S3B.upon(rx_claimed, enter=S3B, outputs=[]) # shouldn't happen
    S3B.upon(release, enter=S4B, outputs=[RC_tx_release])
    S3B.upon(close, enter=S4B, outputs=[RC_tx_release])

    S4A.upon(connected, enter=S4B, outputs=[RC_tx_release])
    S4A.upon(close, enter=S4A, outputs=[])
    S4B.upon(lost, enter=S4A, outputs=[])
    S4B.upon(rx_claimed, enter=S4B, outputs=[])
    S4B.upon(rx_released, enter=S5B, outputs=[T_nameplate_done])
    S4B.upon(release, enter=S4B, outputs=[]) # mailbox is lazy
    # Mailbox doesn't remember how many times it's sent a release, and will
    # re-send a new one for each peer message it receives. Ignoring it here
    # is easier than adding a new pair of states to Mailbox.
    S4B.upon(close, enter=S4B, outputs=[])

    S5A.upon(connected, enter=S5B, outputs=[])
    S5B.upon(lost, enter=S5A, outputs=[])
    S5.upon(release, enter=S5, outputs=[]) # mailbox is lazy
    S5.upon(close, enter=S5, outputs=[])
|
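The A/B suffix convention above (A = rendezvous connection down, B = connected) is what lets the new client survive a lost server connection: an input that needs the wire is recorded while offline and replayed on (re)connect. A minimal standalone sketch of that pattern, with a hypothetical `NameplateSketch` class that is not the real `Nameplate` machine:

```python
class NameplateSketch:
    """Toy model of the A/B split: a claim requested while disconnected
    is transmitted automatically once the connection (re)appears, and
    re-transmitted after every reconnection (the server dedupes)."""
    def __init__(self, send):
        self._send = send          # callable that transmits a claim
        self._connected = False
        self._nameplate = None

    def set_nameplate(self, nameplate):   # S0A/S0B -> S1A/S2B
        self._nameplate = nameplate
        if self._connected:
            self._send(self._nameplate)

    def connected(self):                  # S1A/S2A -> S2B: replay the claim
        self._connected = True
        if self._nameplate is not None:
            self._send(self._nameplate)

    def lost(self):                       # S2B -> S2A
        self._connected = False
```

Usage: claims made before the first connection are simply held, then sent on `connected()`; a `lost()`/`connected()` cycle sends the claim again.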
src/wormhole/_order.py | 68 lines (new file)
@@ -0,0 +1,68 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
from attr import attrs, attrib
from attr.validators import provides, instance_of
from automat import MethodicalMachine
from . import _interfaces

@attrs
@implementer(_interfaces.IOrder)
class Order(object):
    _side = attrib(validator=instance_of(type(u"")))
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def __attrs_post_init__(self):
        self._key = None
        self._queue = []
    def wire(self, key, receive):
        self._K = _interfaces.IKey(key)
        self._R = _interfaces.IReceive(receive)

    @m.state(initial=True)
    def S0_no_pake(self): pass # pragma: no cover
    @m.state(terminal=True)
    def S1_yes_pake(self): pass # pragma: no cover

    def got_message(self, side, phase, body):
        #print("ORDER[%s].got_message(%s)" % (self._side, phase))
        assert isinstance(side, type("")), type(side)
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(body, type(b"")), type(body)
        if phase == "pake":
            self.got_pake(side, phase, body)
        else:
            self.got_non_pake(side, phase, body)

    @m.input()
    def got_pake(self, side, phase, body): pass
    @m.input()
    def got_non_pake(self, side, phase, body): pass

    @m.output()
    def queue(self, side, phase, body):
        assert isinstance(side, type("")), type(side)
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(body, type(b"")), type(body)
        self._queue.append((side, phase, body))
    @m.output()
    def notify_key(self, side, phase, body):
        self._K.got_pake(body)
    @m.output()
    def drain(self, side, phase, body):
        del phase
        del body
        for (side, phase, body) in self._queue:
            self._deliver(side, phase, body)
        self._queue[:] = []
    @m.output()
    def deliver(self, side, phase, body):
        self._deliver(side, phase, body)

    def _deliver(self, side, phase, body):
        self._R.got_message(side, phase, body)

    S0_no_pake.upon(got_non_pake, enter=S0_no_pake, outputs=[queue])
    S0_no_pake.upon(got_pake, enter=S1_yes_pake, outputs=[notify_key, drain])
    S1_yes_pake.upon(got_non_pake, enter=S1_yes_pake, outputs=[deliver])
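The Order machine enforces one invariant: no peer message is delivered until the "pake" phase has arrived and been handed to the Key machine, since nothing can be decrypted before the key exchange completes. Its queue/drain policy can be sketched as a plain function (an illustration, not the real class):

```python
def order_messages(messages):
    """Sketch of Order's policy: hold non-pake messages until the 'pake'
    phase arrives, then release the backlog in arrival order and deliver
    later messages immediately. `messages` is a list of (phase, body)."""
    queue, delivered, have_pake = [], [], False
    for phase, body in messages:
        if phase == "pake":
            have_pake = True            # notify_key would fire here
            delivered.extend(queue)     # drain the backlog
            queue[:] = []
        elif have_pake:
            delivered.append((phase, body))   # deliver directly
        else:
            queue.append((phase, body))       # queue until pake
    return delivered
```

Note the pake message itself goes to the Key machine, not to Receive, so it never appears in the delivered list.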
src/wormhole/_receive.py | 89 lines (new file)
@@ -0,0 +1,89 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
from attr import attrs, attrib
from attr.validators import provides, instance_of
from automat import MethodicalMachine
from . import _interfaces
from ._key import derive_key, derive_phase_key, decrypt_data, CryptoError

@attrs
@implementer(_interfaces.IReceive)
class Receive(object):
    _side = attrib(validator=instance_of(type(u"")))
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def __attrs_post_init__(self):
        self._key = None

    def wire(self, boss, send):
        self._B = _interfaces.IBoss(boss)
        self._S = _interfaces.ISend(send)

    @m.state(initial=True)
    def S0_unknown_key(self): pass # pragma: no cover
    @m.state()
    def S1_unverified_key(self): pass # pragma: no cover
    @m.state()
    def S2_verified_key(self): pass # pragma: no cover
    @m.state(terminal=True)
    def S3_scared(self): pass # pragma: no cover

    # from Ordering
    def got_message(self, side, phase, body):
        assert isinstance(side, type("")), type(side)
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(body, type(b"")), type(body)
        assert self._key
        data_key = derive_phase_key(self._key, side, phase)
        try:
            plaintext = decrypt_data(data_key, body)
        except CryptoError:
            self.got_message_bad()
            return
        self.got_message_good(phase, plaintext)
    @m.input()
    def got_message_good(self, phase, plaintext): pass
    @m.input()
    def got_message_bad(self): pass

    # from Key
    @m.input()
    def got_key(self, key): pass

    @m.output()
    def record_key(self, key):
        self._key = key
    @m.output()
    def S_got_verified_key(self, phase, plaintext):
        assert self._key
        self._S.got_verified_key(self._key)
    @m.output()
    def W_happy(self, phase, plaintext):
        self._B.happy()
    @m.output()
    def W_got_verifier(self, phase, plaintext):
        self._B.got_verifier(derive_key(self._key, b"wormhole:verifier"))
    @m.output()
    def W_got_message(self, phase, plaintext):
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(plaintext, type(b"")), type(plaintext)
        self._B.got_message(phase, plaintext)
    @m.output()
    def W_scared(self):
        self._B.scared()

    S0_unknown_key.upon(got_key, enter=S1_unverified_key, outputs=[record_key])
    S1_unverified_key.upon(got_message_good, enter=S2_verified_key,
                           outputs=[S_got_verified_key,
                                    W_happy, W_got_verifier, W_got_message])
    S1_unverified_key.upon(got_message_bad, enter=S3_scared,
                           outputs=[W_scared])
    S2_verified_key.upon(got_message_bad, enter=S3_scared,
                         outputs=[W_scared])
    S2_verified_key.upon(got_message_good, enter=S2_verified_key,
                         outputs=[W_got_message])
    S3_scared.upon(got_message_good, enter=S3_scared, outputs=[])
    S3_scared.upon(got_message_bad, enter=S3_scared, outputs=[])
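The key-verification progression above is worth spelling out: the PAKE-derived key is only "unverified" until the first peer message decrypts successfully; a single decryption failure moves the machine to "scared" (a possible man-in-the-middle), where it stays and drops everything. A toy model (hypothetical `ReceiveSketch`, not the real class, with decryption replaced by a boolean):

```python
class ReceiveSketch:
    """Toy model of Receive's states: unknown -> unverified -> verified,
    with any bad decryption jumping permanently to 'scared'."""
    def __init__(self):
        self.state = "unknown_key"

    def got_key(self, key):
        self.state = "unverified_key"

    def got_message(self, decrypts_ok):
        if self.state == "scared":
            return "dropped"            # terminal: ignore everything
        if not decrypts_ok:
            self.state = "scared"       # possible attacker
            return "scared"
        if self.state == "unverified_key":
            self.state = "verified_key" # first success verifies the key
            return "verified+delivered"
        return "delivered"
```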
src/wormhole/_rendezvous.py | 250 lines (new file)
@@ -0,0 +1,250 @@
from __future__ import print_function, absolute_import, unicode_literals
import os
from six.moves.urllib_parse import urlparse
from attr import attrs, attrib
from attr.validators import provides, instance_of
from zope.interface import implementer
from twisted.python import log
from twisted.internet import defer, endpoints
from twisted.application import internet
from autobahn.twisted import websocket
from . import _interfaces, errors
from .util import (bytes_to_hexstr, hexstr_to_bytes,
                   bytes_to_dict, dict_to_bytes)

class WSClient(websocket.WebSocketClientProtocol):
    def onConnect(self, response):
        # this fires during WebSocket negotiation, and isn't very useful
        # unless you want to modify the protocol settings
        #print("onConnect", response)
        pass

    def onOpen(self, *args):
        # this fires when the WebSocket is ready to go. No arguments
        #print("onOpen", args)
        #self.wormhole_open = True
        self._RC.ws_open(self)

    def onMessage(self, payload, isBinary):
        assert not isBinary
        try:
            self._RC.ws_message(payload)
        except:
            from twisted.python.failure import Failure
            print("LOGGING", Failure())
            log.err()
            raise

    def onClose(self, wasClean, code, reason):
        #print("onClose")
        self._RC.ws_close(wasClean, code, reason)
        #if self.wormhole_open:
        #    self.wormhole._ws_closed(wasClean, code, reason)
        #else:
        #    # we closed before establishing a connection (onConnect) or
        #    # finishing WebSocket negotiation (onOpen): errback
        #    self.factory.d.errback(error.ConnectError(reason))

class WSFactory(websocket.WebSocketClientFactory):
    protocol = WSClient
    def __init__(self, RC, *args, **kwargs):
        websocket.WebSocketClientFactory.__init__(self, *args, **kwargs)
        self._RC = RC

    def buildProtocol(self, addr):
        proto = websocket.WebSocketClientFactory.buildProtocol(self, addr)
        proto._RC = self._RC
        #proto.wormhole_open = False
        return proto

@attrs
@implementer(_interfaces.IRendezvousConnector)
class RendezvousConnector(object):
    _url = attrib(validator=instance_of(type(u"")))
    _appid = attrib(validator=instance_of(type(u"")))
    _side = attrib(validator=instance_of(type(u"")))
    _reactor = attrib()
    _journal = attrib(validator=provides(_interfaces.IJournal))
    _tor_manager = attrib() # TODO: ITorManager or None
    _timing = attrib(validator=provides(_interfaces.ITiming))

    def __attrs_post_init__(self):
        self._trace = None
        self._ws = None
        f = WSFactory(self, self._url)
        f.setProtocolOptions(autoPingInterval=60, autoPingTimeout=600)
        p = urlparse(self._url)
        ep = self._make_endpoint(p.hostname, p.port or 80)
        # TODO: change/wrap ClientService to fail if the first attempt fails
        self._connector = internet.ClientService(ep, f)

    def set_trace(self, f):
        self._trace = f
    def _debug(self, what):
        if self._trace:
            self._trace(old_state="", input=what, new_state="", output=None)

    def _make_endpoint(self, hostname, port):
        if self._tor_manager:
            return self._tor_manager.get_endpoint_for(hostname, port)
        return endpoints.HostnameEndpoint(self._reactor, hostname, port)

    def wire(self, boss, nameplate, mailbox, allocator, lister, terminator):
        self._B = _interfaces.IBoss(boss)
        self._N = _interfaces.INameplate(nameplate)
        self._M = _interfaces.IMailbox(mailbox)
        self._A = _interfaces.IAllocator(allocator)
        self._L = _interfaces.ILister(lister)
        self._T = _interfaces.ITerminator(terminator)

    # from Boss
    def start(self):
        self._connector.startService()

    # from Mailbox
    def tx_claim(self, nameplate):
        self._tx("claim", nameplate=nameplate)

    def tx_open(self, mailbox):
        self._tx("open", mailbox=mailbox)

    def tx_add(self, phase, body):
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(body, type(b"")), type(body)
        self._tx("add", phase=phase, body=bytes_to_hexstr(body))

    def tx_release(self, nameplate):
        self._tx("release", nameplate=nameplate)

    def tx_close(self, mailbox, mood):
        self._tx("close", mailbox=mailbox, mood=mood)

    def stop(self):
        d = defer.maybeDeferred(self._connector.stopService)
        d.addErrback(log.err) # TODO: deliver error upstairs?
        d.addBoth(self._stopped)

    # from Lister
    def tx_list(self):
        self._tx("list")

    # from Code
    def tx_allocate(self):
        self._tx("allocate")

    # from our WSClient (the WebSocket protocol)
    def ws_open(self, proto):
        self._debug("R.connected")
        self._ws = proto
        try:
            self._tx("bind", appid=self._appid, side=self._side)
            self._N.connected()
            self._M.connected()
            self._L.connected()
            self._A.connected()
        except Exception as e:
            self._B.error(e)
            raise
        self._debug("R.connected finished notifications")

    def ws_message(self, payload):
        msg = bytes_to_dict(payload)
        if msg["type"] != "ack":
            self._debug("R.rx(%s %s%s)" %
                        (msg["type"], msg.get("phase", ""),
                         "[mine]" if msg.get("side", "") == self._side else "",
                         ))

        self._timing.add("ws_receive", _side=self._side, message=msg)
        mtype = msg["type"]
        meth = getattr(self, "_response_handle_" + mtype, None)
        if not meth:
            # make tests fail, but the real application will ignore it
            log.err(errors._UnknownMessageTypeError(
                "Unknown inbound message type %r" % (msg,)))
            return
        try:
            return meth(msg)
        except Exception as e:
            log.err(e)
            self._B.error(e)
            raise

    def ws_close(self, wasClean, code, reason):
        self._debug("R.lost")
        self._ws = None
        self._N.lost()
        self._M.lost()
        self._L.lost()
        self._A.lost()

    # internal
    def _stopped(self, res):
        self._T.stopped()

    def _tx(self, mtype, **kwargs):
        assert self._ws
        # msgid is used by misc/dump-timing.py to correlate our sends with
        # their receives, and vice versa. They are also correlated with the
        # ACKs we get back from the server (which we otherwise ignore). There
        # are so few messages, 16 bits is enough to be mostly-unique.
        kwargs["id"] = bytes_to_hexstr(os.urandom(2))
        kwargs["type"] = mtype
        self._debug("R.tx(%s %s)" % (mtype.upper(), kwargs.get("phase", "")))
        payload = dict_to_bytes(kwargs)
        self._timing.add("ws_send", _side=self._side, **kwargs)
        self._ws.sendMessage(payload, False)

    def _response_handle_allocated(self, msg):
        nameplate = msg["nameplate"]
        assert isinstance(nameplate, type("")), type(nameplate)
        self._A.rx_allocated(nameplate)

    def _response_handle_nameplates(self, msg):
        # we get a list of {id: ID}, with maybe more attributes in the future
        nameplates = msg["nameplates"]
        assert isinstance(nameplates, list), type(nameplates)
        nids = set()
        for n in nameplates:
            assert isinstance(n, dict), type(n)
            nameplate_id = n["id"]
            assert isinstance(nameplate_id, type("")), type(nameplate_id)
            nids.add(nameplate_id)
        # deliver a set of nameplate ids
        self._L.rx_nameplates(nids)

    def _response_handle_ack(self, msg):
        pass

    def _response_handle_error(self, msg):
        # the server sent us a type=error. Most cases are due to our mistakes
        # (malformed protocol messages, sending things in the wrong order),
        # but it can also result from CrowdedError (more than two clients
        # using the same channel).
        err = msg["error"]
        orig = msg["orig"]
        self._B.rx_error(err, orig)

    def _response_handle_welcome(self, msg):
        self._B.rx_welcome(msg["welcome"])

    def _response_handle_claimed(self, msg):
        mailbox = msg["mailbox"]
        assert isinstance(mailbox, type("")), type(mailbox)
        self._N.rx_claimed(mailbox)

    def _response_handle_message(self, msg):
        side = msg["side"]
        phase = msg["phase"]
        assert isinstance(phase, type("")), type(phase)
        body = hexstr_to_bytes(msg["body"]) # bytes
        self._M.rx_message(side, phase, body)

    def _response_handle_released(self, msg):
        self._N.rx_released()

    def _response_handle_closed(self, msg):
        self._M.rx_closed()


# record, message, payload, packet, bundle, ciphertext, plaintext
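Every outbound rendezvous message follows the framing that `_tx` builds: a JSON object with a `type` field plus a random 16-bit hex `id` used only to correlate sends with server ACKs in timing dumps. A self-contained sketch of that framing (a hypothetical stand-in for `_tx`, using stdlib `json` in place of the project's `dict_to_bytes` helper):

```python
import json
import os

def make_ws_message(mtype, **kwargs):
    """Sketch of the wire framing used by RendezvousConnector._tx: the
    payload is UTF-8 JSON with 'type' and a random 16-bit msgid. With so
    few messages per session, 16 bits is enough to be mostly-unique."""
    kwargs["id"] = bytes_to_hexstr(os.urandom(2))
    kwargs["type"] = mtype
    return json.dumps(kwargs).encode("utf-8")

def bytes_to_hexstr(b):
    """Stand-in for wormhole.util.bytes_to_hexstr."""
    return b.hex() if hasattr(b, "hex") else b.encode("hex")
```

For example, `make_ws_message("claim", nameplate="4")` yields a JSON object with `type`, `nameplate`, and a 4-hex-digit `id`.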
src/wormhole/_rlcompleter.py | 201 lines (new file)
@@ -0,0 +1,201 @@
from __future__ import print_function, unicode_literals
import os, traceback
from sys import stderr
try:
    import readline
except ImportError:
    readline = None
from six.moves import input
from attr import attrs, attrib
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.internet.threads import deferToThread, blockingCallFromThread
from .errors import KeyFormatError, AlreadyInputNameplateError

errf = None
if 0:
    errf = open("err", "w") if os.path.exists("err") else None
def debug(*args, **kwargs):
    if errf:
        print(*args, file=errf, **kwargs)
        errf.flush()

@attrs
class CodeInputter(object):
    _input_helper = attrib()
    _reactor = attrib()
    def __attrs_post_init__(self):
        self.used_completion = False
        self._matches = None
        # once we've claimed the nameplate, we can't go back
        self._committed_nameplate = None # or string

    def bcft(self, f, *a, **kw):
        return blockingCallFromThread(self._reactor, f, *a, **kw)

    def completer(self, text, state):
        try:
            return self._wrapped_completer(text, state)
        except Exception as e:
            # completer exceptions are normally silently discarded, which
            # makes debugging challenging
            print("completer exception: %s" % e)
            traceback.print_exc()
            raise e

    def _wrapped_completer(self, text, state):
        self.used_completion = True
        # if we get here, then readline must be active
        ct = readline.get_completion_type()
        if state == 0:
            debug("completer starting (%s) (state=0) (ct=%d)" % (text, ct))
            self._matches = self._commit_and_build_completions(text)
            debug(" matches:", " ".join(["'%s'" % m for m in self._matches]))
        else:
            debug(" s%d t'%s' ct=%d" % (state, text, ct))

        if state >= len(self._matches):
            debug(" returning None")
            return None
        debug(" returning '%s'" % self._matches[state])
        return self._matches[state]

    def _commit_and_build_completions(self, text):
        ih = self._input_helper
        if "-" in text:
            got_nameplate = True
            nameplate, words = text.split("-", 1)
        else:
            got_nameplate = False
            nameplate = text # partial

        # 'text' is one of these categories:
        # "" or "12": complete on nameplates (all that match, maybe just one)

        # "123-": if we haven't already committed to a nameplate, commit and
        # wait for the wordlist. Then (either way) return the whole wordlist.

        # "123-supp": if we haven't already committed to a nameplate, commit
        # and wait for the wordlist. Then (either way) return all current
        # matches.

        if self._committed_nameplate:
            if not got_nameplate or nameplate != self._committed_nameplate:
                # they deleted past the commitment point: we can't use
                # this. For now, bail, but in the future let's find a
                # gentler way to encourage them to not do that.
                raise AlreadyInputNameplateError(
                    "nameplate (%s-) already entered, cannot go back" %
                    self._committed_nameplate)
        if not got_nameplate:
            # we're completing on nameplates: "" or "12" or "123"
            self.bcft(ih.refresh_nameplates) # results arrive later
            debug(" getting nameplates")
            completions = self.bcft(ih.get_nameplate_completions, nameplate)
        else: # "123-" or "123-supp"
            # time to commit to this nameplate, if they haven't already
            if not self._committed_nameplate:
                debug(" choose_nameplate(%s)" % nameplate)
                self.bcft(ih.choose_nameplate, nameplate)
                self._committed_nameplate = nameplate

                # Now we want to wait for the wordlist to be available. If
                # the user just typed "12-supp TAB", we'll claim "12" but
                # will need a server roundtrip to discover that "supportive"
                # is the only match. If we don't block, we'd return an empty
                # wordlist to readline (which will beep and show no
                # completions). *Then* when the user hits TAB again a moment
                # later (after the wordlist has arrived, but the user hasn't
                # modified the input line since the previous empty response),
                # readline would show one match but not complete anything.

                # In general we want to avoid returning empty lists to
                # readline. If the user hits TAB when typing in the nameplate
                # (before the sender has established one, or before we've
                # heard about it from the server), it can't be helped. But
                # for the rest of the code, a simple wait-for-wordlist will
                # improve the user experience.
                self.bcft(ih.when_wordlist_is_available) # blocks on CLAIM
            # and we're completing on words now
            debug(" getting words (%s)" % (words,))
            completions = [nameplate+"-"+c
                           for c in self.bcft(ih.get_word_completions, words)]

        # rlcompleter wants full strings
        return sorted(completions)

    def finish(self, text):
        if "-" not in text:
            raise KeyFormatError("incomplete wormhole code")
        nameplate, words = text.split("-", 1)

        if self._committed_nameplate:
            if nameplate != self._committed_nameplate:
                # they deleted past the commitment point: we can't use
                # this. For now, bail, but in the future let's find a
                # gentler way to encourage them to not do that.
                raise AlreadyInputNameplateError(
                    "nameplate (%s-) already entered, cannot go back" %
                    self._committed_nameplate)
        else:
            debug(" choose_nameplate(%s)" % nameplate)
            self._input_helper.choose_nameplate(nameplate)
        debug(" choose_words(%s)" % words)
        self._input_helper.choose_words(words)

def _input_code_with_completion(prompt, input_helper, reactor):
    c = CodeInputter(input_helper, reactor)
    if readline is not None:
        if readline.__doc__ and "libedit" in readline.__doc__:
            readline.parse_and_bind("bind ^I rl_complete")
        else:
            readline.parse_and_bind("tab: complete")
        readline.set_completer(c.completer)
        readline.set_completer_delims("")
        debug("==== readline-based completion is prepared")
    else:
        debug("==== unable to import readline, disabling completion")
    code = input(prompt)
    # Code is str(bytes) on py2, and str(unicode) on py3. We want unicode.
    if isinstance(code, bytes):
        code = code.decode("utf-8")
    c.finish(code)
    return c.used_completion

def warn_readline():
    # When our process receives a SIGINT, Twisted's SIGINT handler will
    # stop the reactor and wait for all threads to terminate before the
    # process exits. However, if we were waiting for
    # input_code_with_completion() when SIGINT happened, the readline
    # thread will be blocked waiting for something on stdin. Trick the
    # user into satisfying the blocking read so we can exit.
    print("\nCommand interrupted: please press Return to quit", file=stderr)

    # Other potential approaches to this problem:
    # * hard-terminate our process with os._exit(1), but make sure the
    #   tty gets reset to a normal mode ("cooked"?) first, so that the
    #   next shell command the user types is echoed correctly
    # * track down the thread (t.p.threadable.getThreadID from inside the
    #   thread), get a cffi binding to pthread_kill, deliver SIGINT to it
    # * allocate a pty pair (pty.openpty), replace sys.stdin with the
    #   slave, build a pty bridge that copies bytes (and other PTY
    #   things) from the real stdin to the master, then close the slave
    #   at shutdown, so readline sees EOF
    # * write tab-completion and basic editing (TTY raw mode,
    #   backspace-is-erase) without readline, probably with curses or
    #   twisted.conch.insults
    # * write a separate program to get codes (maybe just "wormhole
    #   --internal-get-code"), run it as a subprocess, let it inherit
    #   stdin/stdout, send it SIGINT when we receive SIGINT ourselves. It
    #   needs an RPC mechanism (over some extra file descriptors) to ask
    #   us to fetch the current nameplate_id list.
    #
    # Note that hard-terminating our process with os.kill(os.getpid(),
    # signal.SIGKILL), or SIGTERM, doesn't seem to work: the thread
    # doesn't see the signal, and we must still wait for stdin to make
    # readline finish.

@inlineCallbacks
def input_with_completion(prompt, input_helper, reactor):
    t = reactor.addSystemEventTrigger("before", "shutdown", warn_readline)
    #input_helper.refresh_nameplates()
    used_completion = yield deferToThread(_input_code_with_completion,
                                          prompt, input_helper, reactor)
    reactor.removeSystemEventTrigger(t)
    returnValue(used_completion)
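The `completer(text, state)` method above follows the standard readline completer contract: readline calls it with `state=0, 1, 2, ...` for the same `text`, collecting one match per call until the completer returns `None`. A small driver plus a toy completer (hypothetical names, not part of the module) makes the contract testable without a terminal:

```python
def drive_completer(completer, text):
    """Simulate readline: call completer(text, 0), (text, 1), ...
    collecting matches until it returns None."""
    matches, state = [], 0
    while True:
        m = completer(text, state)
        if m is None:
            return matches
        matches.append(m)
        state += 1

# Toy completer over a fixed candidate list, standing in for
# CodeInputter.completer (which builds candidates from the server).
WORDS = ["4-supportive", "4-surrender"]
def toy_completer(text, state):
    ms = [w for w in WORDS if w.startswith(text)]
    return ms[state] if state < len(ms) else None
```

Note that `CodeInputter` only rebuilds its match list at `state == 0`, exactly because readline re-invokes the completer once per match.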
src/wormhole/_send.py | 64 lines (new file)
@@ -0,0 +1,64 @@
from __future__ import print_function, absolute_import, unicode_literals
from attr import attrs, attrib
from attr.validators import provides, instance_of
from zope.interface import implementer
from automat import MethodicalMachine
from . import _interfaces
from ._key import derive_phase_key, encrypt_data

@attrs
@implementer(_interfaces.ISend)
class Send(object):
    _side = attrib(validator=instance_of(type(u"")))
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def __attrs_post_init__(self):
        self._key = None
        self._queue = []

    def wire(self, mailbox):
        self._M = _interfaces.IMailbox(mailbox)

    @m.state(initial=True)
    def S0_no_key(self): pass # pragma: no cover
    @m.state(terminal=True)
    def S1_verified_key(self): pass # pragma: no cover

    # from Receive
    @m.input()
    def got_verified_key(self, key): pass
    # from Boss
    @m.input()
    def send(self, phase, plaintext): pass

    @m.output()
    def queue(self, phase, plaintext):
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(plaintext, type(b"")), type(plaintext)
        self._queue.append((phase, plaintext))
    @m.output()
    def record_key(self, key):
        self._key = key
    @m.output()
    def drain(self, key):
        del key
        for (phase, plaintext) in self._queue:
            self._encrypt_and_send(phase, plaintext)
        self._queue[:] = []
    @m.output()
    def deliver(self, phase, plaintext):
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(plaintext, type(b"")), type(plaintext)
        self._encrypt_and_send(phase, plaintext)

    def _encrypt_and_send(self, phase, plaintext):
        assert self._key
        data_key = derive_phase_key(self._key, self._side, phase)
        encrypted = encrypt_data(data_key, plaintext)
        self._M.add_message(phase, encrypted)

    S0_no_key.upon(send, enter=S0_no_key, outputs=[queue])
    S0_no_key.upon(got_verified_key, enter=S1_verified_key,
                   outputs=[record_key, drain])
    S1_verified_key.upon(send, enter=S1_verified_key, outputs=[deliver])
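`_encrypt_and_send` never encrypts two messages under the same key: each (side, phase) pair gets its own key derived from the shared master key, so one side's ciphertext can never be replayed as the other side's message. The real derivation lives in `._key` (HKDF-based); the sketch below uses HMAC-SHA256 as an illustrative stand-in, so `phase_key_sketch` and its `purpose` label are assumptions, not the project's actual construction:

```python
import hashlib
import hmac

def phase_key_sketch(master_key, side, phase):
    """Illustrative stand-in for derive_phase_key: mix the side and phase
    names into a per-message key. Distinct (side, phase) pairs must yield
    distinct keys; the same pair must always yield the same key."""
    purpose = (b"sketch:phase:"
               + hashlib.sha256(side.encode("ascii")).digest()
               + hashlib.sha256(phase.encode("ascii")).digest())
    return hmac.new(master_key, purpose, hashlib.sha256).digest()
```

The design point is the domain separation, not the particular PRF: swapping sides or phases changes the derived key, which is what makes reflection attacks fail to decrypt.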
src/wormhole/_terminator.py | 106 lines (new file)
@@ -0,0 +1,106 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
from automat import MethodicalMachine
from . import _interfaces

@implementer(_interfaces.ITerminator)
class Terminator(object):
    m = MethodicalMachine()
    set_trace = getattr(m, "setTrace", lambda self, f: None)

    def __init__(self):
        self._mood = None

    def wire(self, boss, rendezvous_connector, nameplate, mailbox):
        self._B = _interfaces.IBoss(boss)
        self._RC = _interfaces.IRendezvousConnector(rendezvous_connector)
        self._N = _interfaces.INameplate(nameplate)
        self._M = _interfaces.IMailbox(mailbox)

    # 4*2-1 main states:
    # (nm, n, m, 0): which of the nameplate and/or mailbox is still active
    # (o, ""): "o" means open (not-yet-closing), "" means trying to close
    # S0 is special: we don't hang out in it

    # TODO: rename o to 0, "" to 1. "S1" is special/terminal
    # so S0nm/S0n/S0m/S0, S1nm/S1n/S1m/(S1)

    # We start in Snmo (non-closing). When both the nameplate and the
    # mailbox are done, and we're closing, we stop the RendezvousConnector.

    @m.state(initial=True)
    def Snmo(self): pass # pragma: no cover
    @m.state()
    def Smo(self): pass # pragma: no cover
    @m.state()
    def Sno(self): pass # pragma: no cover
    @m.state()
    def S0o(self): pass # pragma: no cover

    @m.state()
    def Snm(self): pass # pragma: no cover
    @m.state()
    def Sm(self): pass # pragma: no cover
    @m.state()
    def Sn(self): pass # pragma: no cover
    #@m.state()
    #def S0(self): pass # unused

    @m.state()
    def S_stopping(self): pass # pragma: no cover
    @m.state(terminal=True)
    def S_stopped(self): pass # pragma: no cover

    # from Boss
    @m.input()
    def close(self, mood): pass

    # from Nameplate
    @m.input()
    def nameplate_done(self): pass

    # from Mailbox
    @m.input()
    def mailbox_done(self): pass

    # from RendezvousConnector
    @m.input()
    def stopped(self): pass


    @m.output()
    def close_nameplate(self, mood):
        self._N.close() # ignores mood
    @m.output()
    def close_mailbox(self, mood):
        self._M.close(mood)

    @m.output()
    def ignore_mood_and_RC_stop(self, mood):
        self._RC.stop()
    @m.output()
    def RC_stop(self):
        self._RC.stop()
    @m.output()
    def B_closed(self):
        self._B.closed()

    Snmo.upon(mailbox_done, enter=Sno, outputs=[])
    Snmo.upon(close, enter=Snm, outputs=[close_nameplate, close_mailbox])
    Snmo.upon(nameplate_done, enter=Smo, outputs=[])

    Sno.upon(close, enter=Sn, outputs=[close_nameplate, close_mailbox])
    Sno.upon(nameplate_done, enter=S0o, outputs=[])

    Smo.upon(close, enter=Sm, outputs=[close_nameplate, close_mailbox])
    Smo.upon(mailbox_done, enter=S0o, outputs=[])

    Snm.upon(mailbox_done, enter=Sn, outputs=[])
    Snm.upon(nameplate_done, enter=Sm, outputs=[])

    Sn.upon(nameplate_done, enter=S_stopping, outputs=[RC_stop])
    S0o.upon(close, enter=S_stopping,
             outputs=[close_nameplate, close_mailbox, ignore_mood_and_RC_stop])
    Sm.upon(mailbox_done, enter=S_stopping, outputs=[RC_stop])

    S_stopping.upon(stopped, enter=S_stopped, outputs=[B_closed])
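Stripped of the state-naming scheme, Terminator is a barrier: the rendezvous connection is stopped only after three independent events have all happened, in any order: the nameplate is released, the mailbox is closed, and the application asked to close. A minimal sketch of that barrier (a hypothetical `TerminatorSketch`, not the real machine, which also tracks the connection stop/stopped handshake):

```python
class TerminatorSketch:
    """Toy model of Terminator's shutdown condition: fire the stop
    callback exactly once, after nameplate_done, mailbox_done, and
    close have all arrived, regardless of order."""
    def __init__(self, stop):
        self._stop = stop
        self._pending = {"nameplate", "mailbox", "close"}

    def _done(self, what):
        self._pending.discard(what)
        if not self._pending:
            self._stop()

    def nameplate_done(self): self._done("nameplate")
    def mailbox_done(self): self._done("mailbox")
    def close(self): self._done("close")
```

Encoding the same barrier as explicit states (Snmo, Sno, ... as above) is what lets automat draw and exhaustively check the machine.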
src/wormhole/wordlist.py
@@ -1,4 +1,8 @@
-from __future__ import unicode_literals
+from __future__ import unicode_literals, print_function
+import os
from zope.interface import implementer
from ._interfaces import IWordlist

# The PGP Word List, which maps bytes to phonetically-distinct words. There
# are two lists, even and odd, and encodings should alternate between them to
# detect dropped words. https://en.wikipedia.org/wiki/PGP_Words
@@ -146,13 +150,44 @@ byte_to_even_word = dict([(unhexlify(k.encode("ascii")), both_words[0])
byte_to_odd_word = dict([(unhexlify(k.encode("ascii")), both_words[1])
                         for k,both_words
                         in raw_words.items()])

even_words_lowercase, odd_words_lowercase = set(), set()
even_words_lowercase_to_byte, odd_words_lowercase_to_byte = dict(), dict()

for k,both_words in raw_words.items():
    even_word, odd_word = both_words

    even_words_lowercase.add(even_word.lower())
    even_words_lowercase_to_byte[even_word.lower()] = unhexlify(k.encode("ascii"))

    odd_words_lowercase.add(odd_word.lower())
    odd_words_lowercase_to_byte[odd_word.lower()] = unhexlify(k.encode("ascii"))

@implementer(IWordlist)
class PGPWordList(object):
    def get_completions(self, prefix, num_words=2):
        # start with the odd words
        count = prefix.count("-")
        if count % 2 == 0:
            words = odd_words_lowercase
        else:
            words = even_words_lowercase
        last_partial_word = prefix.split("-")[-1]
        lp = len(last_partial_word)
        completions = set()
        for word in words:
            if word.startswith(last_partial_word):
                if lp == 0:
                    suffix = prefix + word
                else:
                    suffix = prefix[:-lp] + word
                # append a hyphen if we expect more words
                if count+1 < num_words:
                    suffix += "-"
                completions.add(suffix)
        return completions

    def choose_words(self, length):
        words = []
        for i in range(length):
            # we start with an "odd word"
            if i % 2 == 0:
                words.append(byte_to_odd_word[os.urandom(1)].lower())
            else:
                words.append(byte_to_even_word[os.urandom(1)].lower())
        return "-".join(words)
|
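A quick sketch of how this completion scheme behaves. The two tiny wordlists below are stand-ins (the real PGP lists hold 256 words each), and the function is a condensed replica of the algorithm, not the shipped implementation:

```python
# Stand-in wordlists; the real even/odd PGP lists each map all 256 byte values.
odd_words_lowercase = {"adroitness", "adviser", "aftermath"}
even_words_lowercase = {"aardvark", "absurd", "accrue"}

def get_completions(prefix, num_words=2):
    # odd words come first, then the two lists alternate on each hyphen
    count = prefix.count("-")
    words = odd_words_lowercase if count % 2 == 0 else even_words_lowercase
    last = prefix.split("-")[-1]
    completions = set()
    for word in words:
        if word.startswith(last):
            # replace the partial word with the full candidate
            suffix = prefix[:-len(last)] + word if last else prefix + word
            if count + 1 < num_words:
                suffix += "-"  # more words expected, so append a hyphen
            completions.add(suffix)
    return completions
```

With these stubs, `get_completions("ad")` completes against the odd list and appends a hyphen, while `get_completions("adviser-a")` completes the second word against the even list with no trailing hyphen.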
@@ -1,16 +0,0 @@
from __future__ import print_function, unicode_literals
import sys
from weakref import ref

class ChannelMonitor:
    def __init__(self):
        self._open_channels = set()
    def add(self, w):
        wr = ref(w, self._lost)
        self._open_channels.add(wr)
    def _lost(self, wr):
        print("Error: a Wormhole instance was not closed", file=sys.stderr)
    def close(self, w):
        self._open_channels.discard(ref(w))

monitor = ChannelMonitor() # singleton
@@ -5,14 +5,17 @@ from humanize import naturalsize
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.python import log
from ..wormhole import wormhole
from wormhole import create, input_with_completion, __version__
from ..transit import TransitReceiver
from ..errors import TransferError, WormholeClosedError, NoTorError
from ..util import (dict_to_bytes, bytes_to_dict, bytes_to_hexstr,
                    estimate_free_space)
from .welcome import CLIWelcomeHandler

APPID = u"lothar.com/wormhole/text-or-file-xfer"
VERIFY_TIMER = 1

KEY_TIMER = 1.0
VERIFY_TIMER = 1.0

class RespondError(Exception):
    def __init__(self, response):
@@ -61,8 +64,13 @@ class TwistedReceiver:
        # with the user handing off the wormhole code
        yield self._tor_manager.start()

        w = wormhole(self.args.appid or APPID, self.args.relay_url,
                     self._reactor, self._tor_manager, timing=self.args.timing)
        wh = CLIWelcomeHandler(self.args.relay_url, __version__,
                               self.args.stderr)
        w = create(self.args.appid or APPID, self.args.relay_url,
                   self._reactor,
                   tor_manager=self._tor_manager,
                   timing=self.args.timing,
                   welcome_handler=wh.handle_welcome)
        # I wanted to do this instead:
        #
        # try:
@@ -74,23 +82,71 @@ class TwistedReceiver:
        # as coming from the "yield self._go" line, which wasn't very useful
        # for tracking it down.
        d = self._go(w)
        d.addBoth(w.close)

        # if we succeed, we should close and return the w.close results
        # (which might be an error)
        @inlineCallbacks
        def _good(res):
            yield w.close() # wait for ack
            returnValue(res)

        # if we raise an error, we should close and then return the original
        # error (the close might give us an error, but it isn't as important
        # as the original one)
        @inlineCallbacks
        def _bad(f):
            log.err(f)
            try:
                yield w.close() # might be an error too
            except:
                pass
            returnValue(f)

        d.addCallbacks(_good, _bad)
        yield d

    @inlineCallbacks
    def _go(self, w):
        yield self._handle_code(w)
        yield w.establish_key()
        def on_slow_connection():
            print(u"Key established, waiting for confirmation...",
                  file=self.args.stderr)
        notify = self._reactor.callLater(VERIFY_TIMER, on_slow_connection)

        def on_slow_key():
            print(u"Waiting for sender...", file=self.args.stderr)
        notify = self._reactor.callLater(KEY_TIMER, on_slow_key)
        try:
            verifier = yield w.verify()
            # We wait here until we connect to the server and see the sender's
            # PAKE message. If we used set_code() in the "human-selected
            # offline codes" mode, then the sender might not have even
            # started yet, so we might be sitting here for a while. Because
            # of that possibility, it's probably not appropriate to give up
            # automatically after some timeout. The user can express their
            # impatience by quitting the program with control-C.
            yield w.when_key()
        finally:
            if not notify.called:
                notify.cancel()
        self._show_verifier(verifier)

        def on_slow_verification():
            print(u"Key established, waiting for confirmation...",
                  file=self.args.stderr)
        notify = self._reactor.callLater(VERIFY_TIMER, on_slow_verification)
        try:
            # We wait here until we've seen their VERSION message (which they
            # send after seeing our PAKE message, and has the side-effect of
            # verifying that we both share the same key). There is a
            # round-trip between these two events, and we could experience a
            # significant delay here if:
            #  * the relay server is being restarted
            #  * the network is very slow
            #  * the sender is very slow
            #  * the sender has quit (in which case we may wait forever)

            # It would be reasonable to give up after waiting here for too
            # long.
            verifier_bytes = yield w.when_verified()
        finally:
            if not notify.called:
                notify.cancel()
        self._show_verifier(verifier_bytes)

        want_offer = True
        done = False
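The `_good`/`_bad` callback pair implements a general "always close, but let the original failure win" pattern. A condensed synchronous sketch of the same control flow (plain functions standing in for the Deferred machinery and the wormhole's `close()`):

```python
def run_then_close(go, close):
    # On success: close, then return the result (a close() error propagates,
    # mirroring _good). On failure: still close, but suppress any close()
    # error and re-raise the original one, mirroring _bad.
    try:
        res = go()
    except Exception as original:
        try:
            close()
        except Exception:
            pass  # a close() error is less important than the original
        raise original
    close()
    return res
```

Whatever happens inside `go()`, the channel gets closed, and the caller sees the most informative error available.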
@@ -127,7 +183,7 @@ class TwistedReceiver:
    @inlineCallbacks
    def _get_data(self, w):
        # this may raise WrongPasswordError
        them_bytes = yield w.get()
        them_bytes = yield w.when_received()
        them_d = bytes_to_dict(them_bytes)
        if "error" in them_d:
            raise TransferError(them_d["error"])
@@ -142,11 +198,17 @@ class TwistedReceiver:
        if code:
            w.set_code(code)
        else:
            yield w.input_code("Enter receive wormhole code: ",
                               self.args.code_length)
            prompt = "Enter receive wormhole code: "
            used_completion = yield input_with_completion(prompt,
                                                          w.input_code(),
                                                          self._reactor)
            if not used_completion:
                print(" (note: you can use <Tab> to complete words)",
                      file=self.args.stderr)
        yield w.when_code()

    def _show_verifier(self, verifier):
        verifier_hex = bytes_to_hexstr(verifier)
    def _show_verifier(self, verifier_bytes):
        verifier_hex = bytes_to_hexstr(verifier_bytes)
        if self.args.verify:
            self._msg(u"Verifier %s." % verifier_hex)
@@ -7,9 +7,10 @@ from twisted.protocols import basic
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, returnValue
from ..errors import TransferError, WormholeClosedError, NoTorError
from ..wormhole import wormhole
from wormhole import create, __version__
from ..transit import TransitSender
from ..util import dict_to_bytes, bytes_to_dict, bytes_to_hexstr
from .welcome import CLIWelcomeHandler

APPID = u"lothar.com/wormhole/text-or-file-xfer"
VERIFY_TIMER = 1
@@ -52,11 +53,35 @@ class Sender:
        # with the user handing off the wormhole code
        yield self._tor_manager.start()

        w = wormhole(self._args.appid or APPID, self._args.relay_url,
                     self._reactor, self._tor_manager,
                     timing=self._timing)
        wh = CLIWelcomeHandler(self._args.relay_url, __version__,
                               self._args.stderr)
        w = create(self._args.appid or APPID, self._args.relay_url,
                   self._reactor,
                   tor_manager=self._tor_manager,
                   timing=self._timing,
                   welcome_handler=wh.handle_welcome)
        d = self._go(w)
        d.addBoth(w.close) # must wait for ack from close()

        # if we succeed, we should close and return the w.close results
        # (which might be an error)
        @inlineCallbacks
        def _good(res):
            yield w.close() # wait for ack
            returnValue(res)

        # if we raise an error, we should close and then return the original
        # error (the close might give us an error, but it isn't as important
        # as the original one)
        @inlineCallbacks
        def _bad(f):
            log.err(f)
            try:
                yield w.close() # might be an error too
            except:
                pass
            returnValue(f)

        d.addCallbacks(_good, _bad)
        yield d

    def _send_data(self, data, w):
@@ -83,40 +108,44 @@ class Sender:

        if args.code:
            w.set_code(args.code)
            code = args.code
        else:
            code = yield w.get_code(args.code_length)
            w.allocate_code(args.code_length)
            code = yield w.when_code()
        if not args.zeromode:
            print(u"Wormhole code is: %s" % code, file=args.stderr)
            # flush stderr so the code is displayed immediately
            args.stderr.flush()
        print(u"", file=args.stderr)

        yield w.establish_key()
        # We don't print a "waiting" message for when_key() here, even though
        # we do that in cmd_receive.py, because it's not at all surprising to
        # be waiting here for a long time. We'll sit in when_key() until the
        # receiver has typed in the code and their PAKE message makes it to
        # us.
        yield w.when_key()

        # TODO: don't stall on w.verify() unless they want it
        def on_slow_connection():
            print(u"Key established, waiting for confirmation...",
                  file=args.stderr)
        notify = self._reactor.callLater(VERIFY_TIMER, on_slow_connection)

        # TODO: don't stall on w.verify() unless they want it
        try:
            verifier_bytes = yield w.verify() # this may raise WrongPasswordError
            # The usual sender-chooses-code sequence means the receiver's
            # PAKE should be followed immediately by their VERSION, so
            # w.when_verified() should fire right away. However if we're
            # using the offline-codes sequence, and the receiver typed in
            # their code first, and then they went offline, we might be
            # sitting here for a while, so printing the "waiting" message
            # seems like a good idea. It might even be appropriate to give up
            # after a while.
            verifier_bytes = yield w.when_verified() # might WrongPasswordError
        finally:
            if not notify.called:
                notify.cancel()

        if args.verify:
            verifier = bytes_to_hexstr(verifier_bytes)
            while True:
                ok = six.moves.input("Verifier %s. ok? (yes/no): " % verifier)
                if ok.lower() == "yes":
                    break
                if ok.lower() == "no":
                    err = "sender rejected verification check, abandoned transfer"
                    reject_data = dict_to_bytes({"error": err})
                    w.send(reject_data)
                    raise TransferError(err)
            self._check_verifier(w, verifier_bytes) # blocks, can TransferError

        if self._fd_to_send:
            ts = TransitSender(args.transit_helper,
@@ -146,12 +175,13 @@ class Sender:

        while True:
            try:
                them_d_bytes = yield w.get()
                them_d_bytes = yield w.when_received()
            except WormholeClosedError:
                if done:
                    returnValue(None)
                raise TransferError("unexpected close")
            # TODO: get() fired, so now it's safe to use w.derive_key()
            # TODO: when_received() fired, so now it's safe to use
            # w.derive_key()
            them_d = bytes_to_dict(them_d_bytes)
            #print("GOT", them_d)
            recognized = False
@@ -171,6 +201,18 @@ class Sender:
        if not recognized:
            log.msg("unrecognized message %r" % (them_d,))

    def _check_verifier(self, w, verifier_bytes):
        verifier = bytes_to_hexstr(verifier_bytes)
        while True:
            ok = six.moves.input("Verifier %s. ok? (yes/no): " % verifier)
            if ok.lower() == "yes":
                break
            if ok.lower() == "no":
                err = "sender rejected verification check, abandoned transfer"
                reject_data = dict_to_bytes({"error": err})
                w.send(reject_data)
                raise TransferError(err)

    def _handle_transit(self, receiver_transit):
        ts = self._transit_sender
        ts.add_connection_hints(receiver_transit.get("hints-v1", []))
src/wormhole/cli/welcome.py (new file, 24 lines)
@@ -0,0 +1,24 @@
from __future__ import print_function, absolute_import, unicode_literals
import sys
from ..wormhole import _WelcomeHandler

class CLIWelcomeHandler(_WelcomeHandler):
    def __init__(self, url, cli_version, stderr=sys.stderr):
        _WelcomeHandler.__init__(self, url, stderr)
        self._current_version = cli_version
        self._version_warning_displayed = False

    def handle_welcome(self, welcome):
        # Only warn if we're running a release version (e.g. 0.0.6, not
        # 0.0.6+DISTANCE.gHASH). Only warn once.
        if ("current_cli_version" in welcome
            and "+" not in self._current_version
            and not self._version_warning_displayed
            and welcome["current_cli_version"] != self._current_version):
            print("Warning: errors may occur unless both sides are running the same version", file=self.stderr)
            print("Server claims %s is current, but ours is %s"
                  % (welcome["current_cli_version"], self._current_version),
                  file=self.stderr)
            self._version_warning_displayed = True
        _WelcomeHandler.handle_welcome(self, welcome)
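The warn-once logic is easy to exercise in isolation. A stripped-down sketch (no `_WelcomeHandler` base class, warnings collected in a list instead of printed to stderr):

```python
class VersionWarner(object):
    # Mirrors the conditions in CLIWelcomeHandler.handle_welcome: warn only
    # for release builds (no "+" in the version), and only once.
    def __init__(self, my_version):
        self._current_version = my_version
        self._warned = False
        self.messages = []

    def handle_welcome(self, welcome):
        if ("current_cli_version" in welcome
            and "+" not in self._current_version
            and not self._warned
            and welcome["current_cli_version"] != self._current_version):
            self.messages.append("Server claims %s is current, but ours is %s"
                                 % (welcome["current_cli_version"],
                                    self._current_version))
            self._warned = True

w = VersionWarner("2.0")
w.handle_welcome({"current_cli_version": "3.0"})
w.handle_welcome({"current_cli_version": "3.0"})  # second Welcome stays quiet
```

Repeated Welcome messages (as after a reconnect) produce exactly one warning.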
@@ -1,33 +1,25 @@
from __future__ import unicode_literals
import functools

class ServerError(Exception):
    def __init__(self, message, relay):
        self.message = message
        self.relay = relay
    def __str__(self):
        return self.message
class WormholeError(Exception):
    """Parent class for all wormhole-related errors"""

def handle_server_error(func):
    @functools.wraps(func)
    def _wrap(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ServerError as e:
            print("Server error (from %s):\n%s" % (e.relay, e.message))
            return 1
    return _wrap
class ServerError(WormholeError):
    """The relay server complained about something we did."""

class Timeout(Exception):
class Timeout(WormholeError):
    pass

class WelcomeError(Exception):
class WelcomeError(WormholeError):
    """
    The relay server told us to signal an error, probably because our version
    is too old to possibly work. The server said:"""
    pass

class WrongPasswordError(Exception):
class LonelyError(WormholeError):
    """wormhole.close() was called before the peer connection could be
    established"""

class WrongPasswordError(WormholeError):
    """
    Key confirmation failed. Either you or your correspondent typed the code
    wrong, or a would-be man-in-the-middle attacker guessed incorrectly. You
@@ -37,24 +29,54 @@ class WrongPasswordError(Exception):
    # or the data blob was corrupted, and that's why decrypt failed
    pass

class KeyFormatError(Exception):
class KeyFormatError(WormholeError):
    """
    The key you entered contains spaces. Magic-wormhole expects keys to be
    separated by dashes. Please reenter the key you were given separating the
    words with dashes.
    The key you entered contains spaces or was missing a dash. Magic-wormhole
    expects the numerical nameplate and the code words to be separated by
    dashes. Please reenter the key you were given separating the words with
    dashes.
    """

class ReflectionAttack(Exception):
class ReflectionAttack(WormholeError):
    """An attacker (or bug) reflected our outgoing message back to us."""

class InternalError(Exception):
class InternalError(WormholeError):
    """The programmer did something wrong."""

class WormholeClosedError(InternalError):
    """API calls may not be made after close() is called."""

class TransferError(Exception):
class TransferError(WormholeError):
    """Something bad happened and the transfer failed."""

class NoTorError(Exception):
class NoTorError(WormholeError):
    """--tor was requested, but 'txtorcon' is not installed."""

class NoKeyError(WormholeError):
    """w.derive_key() was called before got_verifier() fired"""

class OnlyOneCodeError(WormholeError):
    """Only one w.generate_code/w.set_code/w.input_code may be called"""

class MustChooseNameplateFirstError(WormholeError):
    """The InputHelper was asked to do get_word_completions() or
    choose_words() before the nameplate was chosen."""

class AlreadyChoseNameplateError(WormholeError):
    """The InputHelper was asked to do get_nameplate_completions() after
    choose_nameplate() was called, or choose_nameplate() was called a second
    time."""

class AlreadyChoseWordsError(WormholeError):
    """The InputHelper was asked to do get_word_completions() after
    choose_words() was called, or choose_words() was called a second time."""

class AlreadyInputNameplateError(WormholeError):
    """The CodeInputter was asked to do completion on a nameplate, when we
    had already committed to a different one."""

class WormholeClosed(Exception):
    """Deferred-returning API calls errback with WormholeClosed if the
    wormhole was already closed, or if it closes before a real result can be
    obtained."""

class _UnknownPhaseError(Exception):
    """internal exception type, for tests."""

class _UnknownMessageTypeError(Exception):
    """internal exception type, for tests."""
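The practical payoff of the new common base class is that callers can catch every wormhole-specific failure with one except clause while unrelated errors still propagate. A tiny illustration with stand-in classes mirroring the hierarchy above:

```python
# Stand-ins for the real exception hierarchy: one shared base class.
class WormholeError(Exception):
    """Parent class for all wormhole-related errors"""

class ServerError(WormholeError):
    pass

class WrongPasswordError(WormholeError):
    pass

def describe(exc):
    # one except clause covers the whole wormhole family
    try:
        raise exc
    except WormholeError as e:
        return "wormhole failure: " + type(e).__name__
    except Exception:
        return "unrelated failure"
```

Previously each error subclassed `Exception` directly, so callers had to enumerate them.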
src/wormhole/journal.py (new file, 38 lines)
@@ -0,0 +1,38 @@
from __future__ import print_function, absolute_import, unicode_literals
from zope.interface import implementer
import contextlib
from ._interfaces import IJournal

@implementer(IJournal)
class Journal(object):
    def __init__(self, save_checkpoint):
        self._save_checkpoint = save_checkpoint
        self._outbound_queue = []
        self._processing = False

    def queue_outbound(self, fn, *args, **kwargs):
        assert self._processing
        self._outbound_queue.append((fn, args, kwargs))

    @contextlib.contextmanager
    def process(self):
        assert not self._processing
        assert not self._outbound_queue
        self._processing = True
        yield # process inbound messages, change state, queue outbound
        self._save_checkpoint()
        for (fn, args, kwargs) in self._outbound_queue:
            fn(*args, **kwargs)
        self._outbound_queue[:] = []
        self._processing = False


@implementer(IJournal)
class ImmediateJournal(object):
    def __init__(self):
        pass
    def queue_outbound(self, fn, *args, **kwargs):
        fn(*args, **kwargs)
    @contextlib.contextmanager
    def process(self):
        yield
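The point of `Journal.process()` is ordering: the application-state checkpoint is saved before any queued outbound messages are released, which is what makes "journaled mode" safe across uncoordinated shutdowns. A standalone sketch of that usage (same shape as the class above, minus the zope interface declaration; the event names are illustrative):

```python
import contextlib

class Journal(object):
    def __init__(self, save_checkpoint):
        self._save_checkpoint = save_checkpoint
        self._outbound_queue = []
        self._processing = False

    def queue_outbound(self, fn, *args, **kwargs):
        assert self._processing
        self._outbound_queue.append((fn, args, kwargs))

    @contextlib.contextmanager
    def process(self):
        self._processing = True
        yield  # state changes happen here; sends are queued, not executed
        self._save_checkpoint()  # checkpoint lands before any send
        for (fn, args, kwargs) in self._outbound_queue:
            fn(*args, **kwargs)
        self._outbound_queue[:] = []
        self._processing = False

events = []
j = Journal(lambda: events.append("checkpoint"))
with j.process():
    j.queue_outbound(events.append, "send-1")
    j.queue_outbound(events.append, "send-2")
```

If the process dies inside the `with` block, nothing was sent and the old checkpoint still matches reality; if it dies after, the new checkpoint already accounts for the queued sends.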
@@ -1,6 +1,6 @@
# no unicode_literals until twisted update
from twisted.application import service
from twisted.internet import defer, task
from twisted.internet import defer, task, reactor
from twisted.python import log
from click.testing import CliRunner
import mock
@@ -84,3 +84,17 @@ def config(*argv):
    cfg = go.call_args[0][1]
    return cfg

@defer.inlineCallbacks
def poll_until(predicate):
    # return a Deferred that won't fire until the predicate is True
    while not predicate():
        d = defer.Deferred()
        reactor.callLater(0.001, d.callback, None)
        yield d

@defer.inlineCallbacks
def pause_one_tick():
    # return a Deferred that won't fire until at least the next reactor tick
    d = defer.Deferred()
    reactor.callLater(0.001, d.callback, None)
    yield d
@@ -6,10 +6,10 @@ from twisted.trial import unittest
from twisted.python import procutils, log
from twisted.internet import defer, endpoints, reactor
from twisted.internet.utils import getProcessOutputAndValue
from twisted.internet.defer import gatherResults, inlineCallbacks
from twisted.internet.defer import gatherResults, inlineCallbacks, returnValue
from .. import __version__
from .common import ServerBase, config
from ..cli import cmd_send, cmd_receive
from ..cli import cmd_send, cmd_receive, welcome
from ..errors import TransferError, WrongPasswordError, WelcomeError
@@ -141,6 +141,45 @@ class OfferData(unittest.TestCase):
        self.assertEqual(str(e),
                         "'%s' is neither file nor directory" % filename)

class LocaleFinder:
    def __init__(self):
        self._run_once = False

    @inlineCallbacks
    def find_utf8_locale(self):
        if self._run_once:
            returnValue(self._best_locale)
        self._best_locale = yield self._find_utf8_locale()
        self._run_once = True
        returnValue(self._best_locale)

    @inlineCallbacks
    def _find_utf8_locale(self):
        # Click really wants to be running under a unicode-capable locale,
        # especially on python3. macOS has en-US.UTF-8 but not C.UTF-8, and
        # most linux boxes have C.UTF-8 but not en-US.UTF-8 . For tests,
        # figure out which one is present and use that. For runtime, it's a
        # mess, as really the user must take responsibility for setting their
        # locale properly. I'm thinking of abandoning Click and going back to
        # twisted.python.usage to avoid this problem in the future.
        (out, err, rc) = yield getProcessOutputAndValue("locale", ["-a"])
        if rc != 0:
            log.msg("error running 'locale -a', rc=%s" % (rc,))
            log.msg("stderr: %s" % (err,))
            returnValue(None)
        out = out.decode("utf-8") # make sure we get a string
        utf8_locales = {}
        for locale in out.splitlines():
            locale = locale.strip()
            if locale.lower().endswith((".utf-8", ".utf8")):
                utf8_locales[locale.lower()] = locale
        for wanted in ["C.utf8", "C.UTF-8", "en_US.utf8", "en_US.UTF-8"]:
            if wanted.lower() in utf8_locales:
                returnValue(utf8_locales[wanted.lower()])
        if utf8_locales:
            returnValue(list(utf8_locales.values())[0])
        returnValue(None)
locale_finder = LocaleFinder()

class ScriptsBase:
    def find_executable(self):
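Once the `locale -a` output is in hand, the selection step is a pure function. A sketch of just that parsing and preference logic (the sample outputs in the usage note are made up; `sorted()` replaces the original's arbitrary `list(...)[0]` pick so the fallback is deterministic):

```python
def pick_utf8_locale(locale_a_output):
    # index the UTF-8-capable locales by lowercased name
    utf8_locales = {}
    for locale in locale_a_output.splitlines():
        locale = locale.strip()
        if locale.lower().endswith((".utf-8", ".utf8")):
            utf8_locales[locale.lower()] = locale
    # prefer the well-known names, in order
    for wanted in ["C.utf8", "C.UTF-8", "en_US.utf8", "en_US.UTF-8"]:
        if wanted.lower() in utf8_locales:
            return utf8_locales[wanted.lower()]
    # otherwise settle for any UTF-8 locale at all
    if utf8_locales:
        return sorted(utf8_locales.values())[0]
    return None
```

For example, `pick_utf8_locale("C\nC.UTF-8\nen_US.utf8\n")` prefers `C.UTF-8`, while a box with only `de_DE.utf8` falls through to the catch-all branch.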
@@ -159,6 +198,7 @@ class ScriptsBase:
                             % (wormhole, sys.executable))
        return wormhole

    @inlineCallbacks
    def is_runnable(self):
        # One property of Versioneer is that many changes to the source tree
        # (making a commit, dirtying a previously-clean tree) will change the
@@ -175,21 +215,22 @@ class ScriptsBase:
        # Setting LANG/LC_ALL to a unicode-capable locale is necessary to
        # convince Click to not complain about a forced-ascii locale. My
        # apologies to folks who want to run tests on a machine that doesn't
        # have the en_US.UTF-8 locale installed.
        # have the C.UTF-8 locale installed.
        locale = yield locale_finder.find_utf8_locale()
        if not locale:
            raise unittest.SkipTest("unable to find UTF-8 locale")
        locale_env = dict(LC_ALL=locale, LANG=locale)
        wormhole = self.find_executable()
        d = getProcessOutputAndValue(wormhole, ["--version"],
                                     env=dict(LC_ALL="en_US.UTF-8",
                                              LANG="en_US.UTF-8"))
        def _check(res):
            out, err, rc = res
            if rc != 0:
                log.msg("wormhole not runnable in this tree:")
                log.msg("out", out)
                log.msg("err", err)
                log.msg("rc", rc)
                raise unittest.SkipTest("wormhole is not runnable in this tree")
        d.addCallback(_check)
        return d
        res = yield getProcessOutputAndValue(wormhole, ["--version"],
                                             env=locale_env)
        out, err, rc = res
        if rc != 0:
            log.msg("wormhole not runnable in this tree:")
            log.msg("out", out)
            log.msg("err", err)
            log.msg("rc", rc)
            raise unittest.SkipTest("wormhole is not runnable in this tree")
        returnValue(locale_env)

class ScriptVersion(ServerBase, ScriptsBase, unittest.TestCase):
    # we need Twisted to run the server, but we run the sender and receiver
@@ -204,7 +245,8 @@ class ScriptVersion(ServerBase, ScriptsBase, unittest.TestCase):
        wormhole = self.find_executable()
        # we must pass on the environment so that "something" doesn't
        # get sad about UTF8 vs. ascii encodings
        out, err, rc = yield getProcessOutputAndValue(wormhole, ["--version"], env=os.environ)
        out, err, rc = yield getProcessOutputAndValue(wormhole, ["--version"],
                                                      env=os.environ)
        err = err.decode("utf-8")
        if "DistributionNotFound" in err:
            log.msg("stderr was %s" % err)
@@ -230,16 +272,17 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase):
    # we need Twisted to run the server, but we run the sender and receiver
    # with deferToThread()

    @inlineCallbacks
    def setUp(self):
        d = self.is_runnable()
        d.addCallback(lambda _: ServerBase.setUp(self))
        return d
        self._env = yield self.is_runnable()
        yield ServerBase.setUp(self)

    @inlineCallbacks
    def _do_test(self, as_subprocess=False,
                 mode="text", addslash=False, override_filename=False,
                 fake_tor=False, overwrite=False, mock_accept=False):
        assert mode in ("text", "file", "empty-file", "directory", "slow-text")
        assert mode in ("text", "file", "empty-file", "directory",
                        "slow-text", "slow-sender-text")
        if fake_tor:
            assert not as_subprocess
        send_cfg = config("send")
@@ -260,7 +303,7 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase):
        receive_dir = self.mktemp()
        os.mkdir(receive_dir)

        if mode in ("text", "slow-text"):
        if mode in ("text", "slow-text", "slow-sender-text"):
            send_cfg.text = message

        elif mode in ("file", "empty-file"):
@@ -335,7 +378,7 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase):
            send_d = getProcessOutputAndValue(
                wormhole_bin, send_args,
                path=send_dir,
                env=dict(LC_ALL="en_US.UTF-8", LANG="en_US.UTF-8"),
                env=self._env,
                )
            recv_args = [
                '--relay-url', self.relayurl,
@@ -351,7 +394,7 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase):
            receive_d = getProcessOutputAndValue(
                wormhole_bin, recv_args,
                path=receive_dir,
                env=dict(LC_ALL="en_US.UTF-8", LANG="en_US.UTF-8"),
                env=self._env,
                )

            (send_res, receive_res) = yield gatherResults([send_d, receive_d],
@@ -386,20 +429,22 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase):
                          ) as mrx_tm:
                    receive_d = cmd_receive.receive(recv_cfg)
            else:
                send_d = cmd_send.send(send_cfg)
                receive_d = cmd_receive.receive(recv_cfg)
                KEY_TIMER = 0 if mode == "slow-sender-text" else 1.0
                with mock.patch.object(cmd_receive, "KEY_TIMER", KEY_TIMER):
                    send_d = cmd_send.send(send_cfg)
                    receive_d = cmd_receive.receive(recv_cfg)

            # The sender might fail, leaving the receiver hanging, or vice
            # versa. Make sure we don't wait on one side exclusively
            if mode == "slow-text":
                with mock.patch.object(cmd_send, "VERIFY_TIMER", 0), \
                     mock.patch.object(cmd_receive, "VERIFY_TIMER", 0):
                    yield gatherResults([send_d, receive_d], True)
            elif mock_accept:
                with mock.patch.object(cmd_receive.six.moves, 'input', return_value='y'):
                    yield gatherResults([send_d, receive_d], True)
            else:
                yield gatherResults([send_d, receive_d], True)
            VERIFY_TIMER = 0 if mode == "slow-text" else 1.0
            with mock.patch.object(cmd_receive, "VERIFY_TIMER", VERIFY_TIMER):
                with mock.patch.object(cmd_send, "VERIFY_TIMER", VERIFY_TIMER):
                    if mock_accept:
                        with mock.patch.object(cmd_receive.six.moves,
                                               'input', return_value='y'):
                            yield gatherResults([send_d, receive_d], True)
                    else:
                        yield gatherResults([send_d, receive_d], True)

            if fake_tor:
                expected_endpoints = [("127.0.0.1", self.relayport)]
@@ -470,9 +515,14 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase):
                          .format(NL=NL), send_stderr)

        # check receiver
        if mode == "text" or mode == "slow-text":
        if mode in ("text", "slow-text", "slow-sender-text"):
            self.assertEqual(receive_stdout, message+NL)
            self.assertEqual(receive_stderr, key_established)
            if mode == "text":
                self.assertEqual(receive_stderr, "")
            elif mode == "slow-text":
                self.assertEqual(receive_stderr, key_established)
            elif mode == "slow-sender-text":
                self.assertEqual(receive_stderr, "Waiting for sender...\n")
        elif mode == "file":
            self.failUnlessEqual(receive_stdout, "")
            self.failUnlessIn("Receiving file ({size:s}) into: {name}"
@@ -536,6 +586,8 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase):

    def test_slow_text(self):
        return self._do_test(mode="slow-text")

    def test_slow_sender_text(self):
        return self._do_test(mode="slow-sender-text")

    @inlineCallbacks
    def _do_test_fail(self, mode, failmode):
@@ -682,6 +734,7 @@ class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase):

        # check server stats
        self._rendezvous.get_stats()
        self.flushLoggedErrors(TransferError)

    def test_fail_file_noclobber(self):
        return self._do_test_fail("file", "noclobber")
@@ -711,6 +764,9 @@ class NotWelcome(ServerBase, unittest.TestCase):
        send_d = cmd_send.send(self.cfg)
        f = yield self.assertFailure(send_d, WelcomeError)
        self.assertEqual(str(f), "please upgrade XYZ")
        # TODO: this comes from log.err() in cmd_send.Sender.go._bad, and I'm
        # undecided about whether that ought to be doing log.err or not
        self.flushLoggedErrors(WelcomeError)

    @inlineCallbacks
    def test_receiver(self):
@@ -719,7 +775,7 @@ class NotWelcome(ServerBase, unittest.TestCase):
        receive_d = cmd_receive.receive(self.cfg)
        f = yield self.assertFailure(receive_d, WelcomeError)
        self.assertEqual(str(f), "please upgrade XYZ")

        self.flushLoggedErrors(WelcomeError)

class Cleanup(ServerBase, unittest.TestCase):
@@ -841,3 +897,44 @@ class AppID(ServerBase, unittest.TestCase):
                                ).fetchall()
        self.assertEqual(len(used), 1, used)
        self.assertEqual(used[0]["app_id"], u"appid2")

class Welcome(unittest.TestCase):
    def do(self, welcome_message, my_version="2.0", twice=False):
        stderr = io.StringIO()
        h = welcome.CLIWelcomeHandler("url", my_version, stderr)
        h.handle_welcome(welcome_message)
        if twice:
            h.handle_welcome(welcome_message)
        return stderr.getvalue()

    def test_empty(self):
        stderr = self.do({})
        self.assertEqual(stderr, "")

    def test_version_current(self):
        stderr = self.do({"current_cli_version": "2.0"})
        self.assertEqual(stderr, "")

    def test_version_old(self):
        stderr = self.do({"current_cli_version": "3.0"})
        expected = ("Warning: errors may occur unless both sides are running the same version\n" +
                    "Server claims 3.0 is current, but ours is 2.0\n")
        self.assertEqual(stderr, expected)

    def test_version_old_twice(self):
        stderr = self.do({"current_cli_version": "3.0"}, twice=True)
        # the handler should only emit the version warning once, even if we
        # get multiple Welcome messages (which could happen if we lose the
        # connection and then reconnect)
        expected = ("Warning: errors may occur unless both sides are running the same version\n" +
                    "Server claims 3.0 is current, but ours is 2.0\n")
        self.assertEqual(stderr, expected)

    def test_version_unreleased(self):
        stderr = self.do({"current_cli_version": "3.0"},
                         my_version="2.5+middle.something")
        self.assertEqual(stderr, "")

    def test_motd(self):
        stderr = self.do({"motd": "hello"})
        self.assertEqual(stderr, "Server (at url) says:\n hello\n")
src/wormhole/test/test_journal.py (new file, 28 lines)
@@ -0,0 +1,28 @@
from __future__ import print_function, absolute_import, unicode_literals
from twisted.trial import unittest
from .. import journal
from .._interfaces import IJournal

class Journal(unittest.TestCase):
    def test_journal(self):
        events = []
        j = journal.Journal(lambda: events.append("checkpoint"))
        self.assert_(IJournal.providedBy(j))

        with j.process():
            j.queue_outbound(events.append, "message1")
            j.queue_outbound(events.append, "message2")
            self.assertEqual(events, [])
        self.assertEqual(events, ["checkpoint", "message1", "message2"])

    def test_immediate(self):
        events = []
        j = journal.ImmediateJournal()
        self.assert_(IJournal.providedBy(j))

        with j.process():
            j.queue_outbound(events.append, "message1")
            self.assertEqual(events, ["message1"])
            j.queue_outbound(events.append, "message2")
            self.assertEqual(events, ["message1", "message2"])
        self.assertEqual(events, ["message1", "message2"])
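These two tests pin down the "journaled mode" contract: inside `process()`, `queue_outbound()` only buffers; when the block exits, the checkpoint callback fires first (durable application state), and only then are the buffered messages released. A minimal sketch under those assumptions (hypothetical `SketchJournal`, not the real `journal.Journal`):

```python
from contextlib import contextmanager

class SketchJournal:
    """Buffer outbound messages during a processing step; deliver them only
    after the application has checkpointed its state."""
    def __init__(self, save_checkpoint):
        self._save_checkpoint = save_checkpoint
        self._queue = []

    def queue_outbound(self, fn, *args):
        # buffered, not delivered yet
        self._queue.append((fn, args))

    @contextmanager
    def process(self):
        yield  # run the caller's processing step
        self._save_checkpoint()  # persist state first
        queued, self._queue = self._queue, []
        for fn, args in queued:
            fn(*args)  # then release the buffered messages
```

This ordering is what makes frequent, uncoordinated shutdown safe: a crash before the checkpoint drops both the state change and its messages together.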
src/wormhole/test/test_machines.py (new file, 1385 lines; diff not shown because it is too large)
src/wormhole/test/test_rlcompleter.py (new file, 365 lines)
@@ -0,0 +1,365 @@
from __future__ import print_function, absolute_import, unicode_literals
import mock
from itertools import count
from twisted.trial import unittest
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twisted.internet.threads import deferToThread
from .._rlcompleter import (input_with_completion,
                            _input_code_with_completion,
                            CodeInputter, warn_readline)
from ..errors import KeyFormatError, AlreadyInputNameplateError

APPID = "appid"

class Input(unittest.TestCase):
    @inlineCallbacks
    def test_wrapper(self):
        helper = object()
        trueish = object()
        with mock.patch("wormhole._rlcompleter._input_code_with_completion",
                        return_value=trueish) as m:
            used_completion = yield input_with_completion("prompt:", helper,
                                                          reactor)
        self.assertIs(used_completion, trueish)
        self.assertEqual(m.mock_calls,
                         [mock.call("prompt:", helper, reactor)])
        # note: if this test fails, the warn_readline() message will probably
        # get written to stderr

class Sync(unittest.TestCase):
    # exercise _input_code_with_completion, which uses the blocking builtin
    # "input()" function, hence _input_code_with_completion is usually in a
    # thread with deferToThread

    @mock.patch("wormhole._rlcompleter.CodeInputter")
    @mock.patch("wormhole._rlcompleter.readline",
                __doc__="I am GNU readline")
    @mock.patch("wormhole._rlcompleter.input", return_value="code")
    def test_readline(self, input, readline, ci):
        c = mock.Mock(name="inhibit parenting")
        c.completer = object()
        trueish = object()
        c.used_completion = trueish
        ci.configure_mock(return_value=c)
        prompt = object()
        input_helper = object()
        reactor = object()
        used = _input_code_with_completion(prompt, input_helper, reactor)
        self.assertIs(used, trueish)
        self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)])
        self.assertEqual(c.mock_calls, [mock.call.finish("code")])
        self.assertEqual(input.mock_calls, [mock.call(prompt)])
        self.assertEqual(readline.mock_calls,
                         [mock.call.parse_and_bind("tab: complete"),
                          mock.call.set_completer(c.completer),
                          mock.call.set_completer_delims(""),
                          ])

    @mock.patch("wormhole._rlcompleter.CodeInputter")
    @mock.patch("wormhole._rlcompleter.readline")
    @mock.patch("wormhole._rlcompleter.input", return_value="code")
    def test_readline_no_docstring(self, input, readline, ci):
        del readline.__doc__  # when in doubt, it assumes GNU readline
        c = mock.Mock(name="inhibit parenting")
        c.completer = object()
        trueish = object()
        c.used_completion = trueish
        ci.configure_mock(return_value=c)
        prompt = object()
        input_helper = object()
        reactor = object()
        used = _input_code_with_completion(prompt, input_helper, reactor)
        self.assertIs(used, trueish)
        self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)])
        self.assertEqual(c.mock_calls, [mock.call.finish("code")])
        self.assertEqual(input.mock_calls, [mock.call(prompt)])
        self.assertEqual(readline.mock_calls,
                         [mock.call.parse_and_bind("tab: complete"),
                          mock.call.set_completer(c.completer),
                          mock.call.set_completer_delims(""),
                          ])

    @mock.patch("wormhole._rlcompleter.CodeInputter")
    @mock.patch("wormhole._rlcompleter.readline",
                __doc__="I am libedit")
    @mock.patch("wormhole._rlcompleter.input", return_value="code")
    def test_libedit(self, input, readline, ci):
        c = mock.Mock(name="inhibit parenting")
        c.completer = object()
        trueish = object()
        c.used_completion = trueish
        ci.configure_mock(return_value=c)
        prompt = object()
        input_helper = object()
        reactor = object()
        used = _input_code_with_completion(prompt, input_helper, reactor)
        self.assertIs(used, trueish)
        self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)])
        self.assertEqual(c.mock_calls, [mock.call.finish("code")])
        self.assertEqual(input.mock_calls, [mock.call(prompt)])
        self.assertEqual(readline.mock_calls,
                         [mock.call.parse_and_bind("bind ^I rl_complete"),
                          mock.call.set_completer(c.completer),
                          mock.call.set_completer_delims(""),
                          ])

    @mock.patch("wormhole._rlcompleter.CodeInputter")
    @mock.patch("wormhole._rlcompleter.readline", None)
    @mock.patch("wormhole._rlcompleter.input", return_value="code")
    def test_no_readline(self, input, ci):
        c = mock.Mock(name="inhibit parenting")
        c.completer = object()
        trueish = object()
        c.used_completion = trueish
        ci.configure_mock(return_value=c)
        prompt = object()
        input_helper = object()
        reactor = object()
        used = _input_code_with_completion(prompt, input_helper, reactor)
        self.assertIs(used, trueish)
        self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)])
        self.assertEqual(c.mock_calls, [mock.call.finish("code")])
        self.assertEqual(input.mock_calls, [mock.call(prompt)])

    @mock.patch("wormhole._rlcompleter.CodeInputter")
    @mock.patch("wormhole._rlcompleter.readline", None)
    @mock.patch("wormhole._rlcompleter.input", return_value=b"code")
    def test_bytes(self, input, ci):
        c = mock.Mock(name="inhibit parenting")
        c.completer = object()
        trueish = object()
        c.used_completion = trueish
        ci.configure_mock(return_value=c)
        prompt = object()
        input_helper = object()
        reactor = object()
        used = _input_code_with_completion(prompt, input_helper, reactor)
        self.assertIs(used, trueish)
        self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)])
        self.assertEqual(c.mock_calls, [mock.call.finish(u"code")])
        self.assertEqual(input.mock_calls, [mock.call(prompt)])

def get_completions(c, prefix):
    completions = []
    for state in count(0):
        text = c.completer(prefix, state)
        if text is None:
            return completions
        completions.append(text)

class Completion(unittest.TestCase):
    def test_simple(self):
        # no actual completion
        helper = mock.Mock()
        c = CodeInputter(helper, "reactor")
        c.finish("1-code-ghost")
        self.assertFalse(c.used_completion)
        self.assertEqual(helper.mock_calls,
                         [mock.call.choose_nameplate("1"),
                          mock.call.choose_words("code-ghost")])

    @mock.patch("wormhole._rlcompleter.readline",
                get_completion_type=mock.Mock(return_value=0))
    def test_call(self, readline):
        # check that it calls _commit_and_build_completions correctly
        helper = mock.Mock()
        c = CodeInputter(helper, "reactor")

        # pretend nameplates: 1, 12, 34

        # first call will be with "1"
        cabc = mock.Mock(return_value=["1", "12"])
        c._commit_and_build_completions = cabc

        self.assertEqual(get_completions(c, "1"), ["1", "12"])
        self.assertEqual(cabc.mock_calls, [mock.call("1")])

        # then "12"
        cabc.reset_mock()
        cabc.configure_mock(return_value=["12"])
        self.assertEqual(get_completions(c, "12"), ["12"])
        self.assertEqual(cabc.mock_calls, [mock.call("12")])

        # now we have three "a" words: "and", "ark", "aaah!zombies!!"
        cabc.reset_mock()
        cabc.configure_mock(return_value=["aargh", "ark", "aaah!zombies!!"])
        self.assertEqual(get_completions(c, "12-a"),
                         ["aargh", "ark", "aaah!zombies!!"])
        self.assertEqual(cabc.mock_calls, [mock.call("12-a")])

        cabc.reset_mock()
        cabc.configure_mock(return_value=["aargh", "aaah!zombies!!"])
        self.assertEqual(get_completions(c, "12-aa"),
                         ["aargh", "aaah!zombies!!"])
        self.assertEqual(cabc.mock_calls, [mock.call("12-aa")])

        cabc.reset_mock()
        cabc.configure_mock(return_value=["aaah!zombies!!"])
        self.assertEqual(get_completions(c, "12-aaa"), ["aaah!zombies!!"])
        self.assertEqual(cabc.mock_calls, [mock.call("12-aaa")])

        c.finish("1-code")
        self.assert_(c.used_completion)

    def test_wrap_error(self):
        helper = mock.Mock()
        c = CodeInputter(helper, "reactor")
        c._wrapped_completer = mock.Mock(side_effect=ValueError("oops"))
        with mock.patch("wormhole._rlcompleter.traceback") as traceback:
            with mock.patch("wormhole._rlcompleter.print") as mock_print:
                with self.assertRaises(ValueError) as e:
                    c.completer("text", 0)
        self.assertEqual(traceback.mock_calls, [mock.call.print_exc()])
        self.assertEqual(mock_print.mock_calls,
                         [mock.call("completer exception: oops")])
        self.assertEqual(str(e.exception), "oops")

    @inlineCallbacks
    def test_build_completions(self):
        rn = mock.Mock()
        # InputHelper.get_nameplate_completions returns just the suffixes
        gnc = mock.Mock()  # get_nameplate_completions
        cn = mock.Mock()  # choose_nameplate
        gwc = mock.Mock()  # get_word_completions
        cw = mock.Mock()  # choose_words
        helper = mock.Mock(refresh_nameplates=rn,
                           get_nameplate_completions=gnc,
                           choose_nameplate=cn,
                           get_word_completions=gwc,
                           choose_words=cw,
                           )
        # this needs a real reactor, for blockingCallFromThread
        c = CodeInputter(helper, reactor)
        cabc = c._commit_and_build_completions

        # in this test, we pretend that nameplates 1,12,34 are active.

        # 43 TAB -> nothing (and refresh_nameplates)
        gnc.configure_mock(return_value=[])
        matches = yield deferToThread(cabc, "43")
        self.assertEqual(matches, [])
        self.assertEqual(rn.mock_calls, [mock.call()])
        self.assertEqual(gnc.mock_calls, [mock.call("43")])
        self.assertEqual(cn.mock_calls, [])
        rn.reset_mock()
        gnc.reset_mock()

        # 1 TAB -> 1-, 12- (and refresh_nameplates)
        gnc.configure_mock(return_value=["1-", "12-"])
        matches = yield deferToThread(cabc, "1")
        self.assertEqual(matches, ["1-", "12-"])
        self.assertEqual(rn.mock_calls, [mock.call()])
        self.assertEqual(gnc.mock_calls, [mock.call("1")])
        self.assertEqual(cn.mock_calls, [])
        rn.reset_mock()
        gnc.reset_mock()

        # 12 TAB -> 12- (and refresh_nameplates)
        # I wouldn't mind if it didn't refresh the nameplates here, but meh
        gnc.configure_mock(return_value=["12-"])
        matches = yield deferToThread(cabc, "12")
        self.assertEqual(matches, ["12-"])
        self.assertEqual(rn.mock_calls, [mock.call()])
        self.assertEqual(gnc.mock_calls, [mock.call("12")])
        self.assertEqual(cn.mock_calls, [])
        rn.reset_mock()
        gnc.reset_mock()

        # 12- TAB -> 12- {all words} (claim, no refresh)
        gnc.configure_mock(return_value=["12-"])
        gwc.configure_mock(return_value=["and-", "ark-", "aaah!zombies!!-"])
        matches = yield deferToThread(cabc, "12-")
        self.assertEqual(matches, ["12-aaah!zombies!!-", "12-and-", "12-ark-"])
        self.assertEqual(rn.mock_calls, [])
        self.assertEqual(gnc.mock_calls, [])
        self.assertEqual(cn.mock_calls, [mock.call("12")])
        self.assertEqual(gwc.mock_calls, [mock.call("")])
        cn.reset_mock()
        gwc.reset_mock()

        # TODO: another path with "3 TAB" then "34-an TAB", so the claim
        # happens in the second call (and it waits for the wordlist)

        # 12-a TAB -> 12-and- 12-ark- 12-aaah!zombies!!-
        gnc.configure_mock(side_effect=ValueError)
        gwc.configure_mock(return_value=["and-", "ark-", "aaah!zombies!!-"])
        matches = yield deferToThread(cabc, "12-a")
        # matches are always sorted
        self.assertEqual(matches, ["12-aaah!zombies!!-", "12-and-", "12-ark-"])
        self.assertEqual(rn.mock_calls, [])
        self.assertEqual(gnc.mock_calls, [])
        self.assertEqual(cn.mock_calls, [])
        self.assertEqual(gwc.mock_calls, [mock.call("a")])
        gwc.reset_mock()

        # 12-and-b TAB -> 12-and-bat 12-and-bet 12-and-but
        gnc.configure_mock(side_effect=ValueError)
        # wordlist knows the code length, so doesn't add hyphens to these
        gwc.configure_mock(return_value=["and-bat", "and-bet", "and-but"])
        matches = yield deferToThread(cabc, "12-and-b")
        self.assertEqual(matches, ["12-and-bat", "12-and-bet", "12-and-but"])
        self.assertEqual(rn.mock_calls, [])
        self.assertEqual(gnc.mock_calls, [])
        self.assertEqual(cn.mock_calls, [])
        self.assertEqual(gwc.mock_calls, [mock.call("and-b")])
        gwc.reset_mock()

        c.finish("12-and-bat")
        self.assertEqual(cw.mock_calls, [mock.call("and-bat")])

    def test_incomplete_code(self):
        helper = mock.Mock()
        c = CodeInputter(helper, "reactor")
        with self.assertRaises(KeyFormatError) as e:
            c.finish("1")
        self.assertEqual(str(e.exception), "incomplete wormhole code")

    @inlineCallbacks
    def test_rollback_nameplate_during_completion(self):
        helper = mock.Mock()
        gwc = helper.get_word_completions = mock.Mock()
        gwc.configure_mock(return_value=["code", "court"])
        c = CodeInputter(helper, reactor)
        cabc = c._commit_and_build_completions
        matches = yield deferToThread(cabc, "1-co")  # this commits us to 1-
        self.assertEqual(helper.mock_calls,
                         [mock.call.choose_nameplate("1"),
                          mock.call.when_wordlist_is_available(),
                          mock.call.get_word_completions("co")])
        self.assertEqual(matches, ["1-code", "1-court"])
        helper.reset_mock()
        with self.assertRaises(AlreadyInputNameplateError) as e:
            yield deferToThread(cabc, "2-co")
        self.assertEqual(str(e.exception),
                         "nameplate (1-) already entered, cannot go back")
        self.assertEqual(helper.mock_calls, [])

    @inlineCallbacks
    def test_rollback_nameplate_during_finish(self):
        helper = mock.Mock()
        gwc = helper.get_word_completions = mock.Mock()
        gwc.configure_mock(return_value=["code", "court"])
        c = CodeInputter(helper, reactor)
        cabc = c._commit_and_build_completions
        matches = yield deferToThread(cabc, "1-co")  # this commits us to 1-
        self.assertEqual(helper.mock_calls,
                         [mock.call.choose_nameplate("1"),
                          mock.call.when_wordlist_is_available(),
                          mock.call.get_word_completions("co")])
        self.assertEqual(matches, ["1-code", "1-court"])
        helper.reset_mock()
        with self.assertRaises(AlreadyInputNameplateError) as e:
            c.finish("2-code")
        self.assertEqual(str(e.exception),
                         "nameplate (1-) already entered, cannot go back")
        self.assertEqual(helper.mock_calls, [])

    @mock.patch("wormhole._rlcompleter.stderr")
    def test_warn_readline(self, stderr):
        # there is no good way to test that this function gets used at the
        # right time, since it involves a reactor and a "system event
        # trigger", but let's at least make sure it's invocable
        warn_readline()
        expected = "\nCommand interrupted: please press Return to quit"
        self.assertEqual(stderr.mock_calls, [mock.call.write(expected),
                                             mock.call.write("\n")])
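The `get_completions()` helper above drives readline's completer protocol: readline calls the completer repeatedly with the same text and `state` = 0, 1, 2, … until it returns None. A minimal standalone completer in that style (hypothetical `make_completer`, unrelated to `CodeInputter`):

```python
def make_completer(words):
    """Readline-style completer: called with (text, state) for
    state = 0, 1, 2, ... and must return the state-th match,
    or None when the matches are exhausted."""
    matches = []
    def completer(text, state):
        nonlocal matches
        if state == 0:  # first call for this text: compute the match list
            matches = sorted(w for w in words if w.startswith(text))
        return matches[state] if state < len(matches) else None
    return completer

# registration (GNU readline), mirroring the mocked calls the tests expect:
#   readline.parse_and_bind("tab: complete")
#   readline.set_completer(make_completer([...]))
#   readline.set_completer_delims("")
```

Clearing the completer delimiters (as `set_completer_delims("")` does above) makes readline hand the whole "nameplate-word-word" string to the completer instead of splitting it on hyphens.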
src/wormhole/test/test_wordlist.py (new file, 31 lines)
@@ -0,0 +1,31 @@
from __future__ import print_function, unicode_literals
import mock
from twisted.trial import unittest
from .._wordlist import PGPWordList

class Completions(unittest.TestCase):
    def test_completions(self):
        wl = PGPWordList()
        gc = wl.get_completions
        self.assertEqual(gc("ar", 2), {"armistice-", "article-"})
        self.assertEqual(gc("armis", 2), {"armistice-"})
        self.assertEqual(gc("armistice", 2), {"armistice-"})
        lots = gc("armistice-", 2)
        self.assertEqual(len(lots), 256, lots)
        first = list(lots)[0]
        self.assert_(first.startswith("armistice-"), first)
        self.assertEqual(gc("armistice-ba", 2),
                         {"armistice-baboon", "armistice-backfield",
                          "armistice-backward", "armistice-banjo"})
        self.assertEqual(gc("armistice-ba", 3),
                         {"armistice-baboon-", "armistice-backfield-",
                          "armistice-backward-", "armistice-banjo-"})
        self.assertEqual(gc("armistice-baboon", 2), {"armistice-baboon"})
        self.assertEqual(gc("armistice-baboon", 3), {"armistice-baboon-"})
        self.assertEqual(gc("armistice-baboon", 4), {"armistice-baboon-"})

class Choose(unittest.TestCase):
    def test_choose_words(self):
        wl = PGPWordList()
        with mock.patch("os.urandom", side_effect=[b"\x04", b"\x10"]):
            self.assertEqual(wl.choose_words(2), "alkali-assume")
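The completion behavior these tests pin down — complete the last partial word, and append a trailing hyphen only while the code still needs more words — can be sketched over a flat wordlist. This is only an approximation: the real `PGPWordList` alternates the even-byte and odd-byte PGP wordlists per position, which this hypothetical helper ignores.

```python
def get_completions(prefix, num_words, wordlist):
    """Complete the last partial word of a hyphen-joined code.
    Appends '-' to a match when more words are still needed."""
    if "-" in prefix:
        committed, partial = prefix.rsplit("-", 1)
        committed += "-"
    else:
        committed, partial = "", prefix
    words_done = committed.count("-")  # words already fully entered
    out = set()
    for word in wordlist:
        if word.startswith(partial):
            # still short of num_words? keep typing, so add the separator
            suffix = "-" if (words_done + 1) < num_words else ""
            out.add(committed + word + suffix)
    return out
```

The trailing-hyphen rule is what lets tab-completion flow straight into the next word without an extra keystroke.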
(diff of one modified file not shown because it is too large)
@@ -1,5 +1,7 @@
from __future__ import print_function, absolute_import, unicode_literals
import json, time
from zope.interface import implementer
from ._interfaces import ITiming

class Event:
    def __init__(self, name, when, **details):

@@ -33,6 +35,7 @@ class Event:
        else:
            self.finish()

@implementer(ITiming)
class DebugTiming:
    def __init__(self):
        self._events = []
@@ -1,6 +1,7 @@
from __future__ import print_function, unicode_literals
import sys, re
import six
from zope.interface import implementer
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.internet.error import ConnectError
from twisted.internet.endpoints import clientFromString

@@ -14,9 +15,12 @@ except ImportError:
    TorClientEndpoint = None
DEFAULT_VALUE = "DEFAULT_VALUE"
import ipaddress
from . import _interfaces
from .timing import DebugTiming
from .transit import allocate_tcp_port

@implementer(_interfaces.ITorManager)
class TorManager:
    def __init__(self, reactor, launch_tor=False, tor_control_port=None,
                 timing=None, stderr=sys.stderr):
(diff of one modified file not shown because it is too large)
@@ -1,7 +1,7 @@
 import json
 from twisted.internet.defer import inlineCallbacks, returnValue

-from .wormhole import wormhole
+from . import wormhole
 from .tor_manager import TorManager
 from .errors import NoTorError

@@ -38,16 +38,17 @@ def receive(reactor, appid, relay_url, code,
             raise NoTorError()
         yield tm.start()

-    wh = wormhole(appid, relay_url, reactor, tor_manager=tm)
+    wh = wormhole.create(appid, relay_url, reactor, tor_manager=tm)
     if code is None:
-        code = yield wh.get_code()
+        wh.allocate_code()
+        code = yield wh.when_code()
     else:
         wh.set_code(code)
     # we'll call this no matter what, even if you passed in a code --
     # maybe it should be only in the 'if' block above?
     if on_code:
         on_code(code)
-    data = yield wh.get()
+    data = yield wh.when_received()
     data = json.loads(data.decode("utf-8"))
     offer = data.get('offer', None)
     if not offer:

@@ -100,9 +101,10 @@ def send(reactor, appid, relay_url, data, code,
         if not tm.tor_available():
             raise NoTorError()
         yield tm.start()
-    wh = wormhole(appid, relay_url, reactor, tor_manager=tm)
+    wh = wormhole.create(appid, relay_url, reactor, tor_manager=tm)
     if code is None:
-        code = yield wh.get_code()
+        wh.allocate_code()
+        code = yield wh.when_code()
     else:
         wh.set_code(code)
     if on_code:

@@ -115,7 +117,7 @@ def send(reactor, appid, relay_url, data, code,
             }
         }).encode("utf-8")
         )
-    data = yield wh.get()
+    data = yield wh.when_received()
     data = json.loads(data.decode("utf-8"))
     answer = data.get('answer', None)
     yield wh.close()