One downside is that we keep the wormhole channel allocated longer (we
have to finish the file transfer before we can deallocate it, which
could take a while for large files). Maybe we can fix this in the
future.
Also clean up test_scripts.PregeneratedCode:
* fetch results from both sides at the same time
* only check rc when using a subprocess, since the direct call doesn't
use rc=0 anymore
* no need to cancel the other side's Deferred when one errors
* provide more information if stderr was non-empty
And provide a close() that can live at the end of a Deferred chain, so
callers can do d.addBoth(w.close).
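For illustration, the intended shape is something like this (send_file
is a made-up stand-in for whatever returns the transfer's Deferred):

    def transfer_then_close(w):
        d = send_file(w)   # hypothetical: returns a Deferred
        # close() runs on success and on failure alike; addBoth hands it
        # the result/Failure, so close() must tolerate one argument. Its
        # own Deferred keeps the chain alive until the channel is
        # deallocated.
        d.addBoth(w.close)
        return d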
I like auto-close-on-error in general, but I'm removing it so I can
clean up the error-handling pathways. It will probably come back later.
The constraint is that it must be possible to wait on the return
Deferred that close() gives you (to synchronize tests, or keep the CLI
program running long enough to deallocate the channel) even if something
else (an error handler) called close() earlier. This will require
either a OneShotObserverList, or keeping a "deallocated" Deferred around
in case more callers want to wait on it later.
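A minimal sketch of the second option (the names here are assumptions,
not the real implementation): remember the deallocation result and hand
every close() caller its own Deferred, even if it shows up late.

    from twisted.internet import defer

    class _Deallocated:
        def __init__(self):
            self._fired = False
            self._result = None
            self._waiters = []
        def wait(self):
            # safe to call any number of times, before or after firing
            if self._fired:
                return defer.succeed(self._result)
            d = defer.Deferred()
            self._waiters.append(d)
            return d
        def fire(self, result=None):
            # called once, when the "deallocated" message arrives
            self._fired = True
            self._result = result
            waiters, self._waiters = self._waiters, []
            for d in waiters:
                d.callback(result)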
If we're closing because of an error, we need to sleep through the old
error (not immediately re-raise it), to be able to wait for the
"deallocated" message. This might want
to be different: maybe clear the error first, or store the errors in a
list and sleep until a second error happens.
These were split out to make the blocking- and Twisted-based
implementations share some code, but now that we're down to just
Twisted, it's clearer to merge them back in.
Hitting Control-C (which sends SIGINT) while we're waiting in the
readline-based input_code() function didn't shut down the process
properly: the reactor would wait for the readline thread to exit, which
wouldn't happen until it finished getting a code, which requires the
user to hit Return. I haven't found a good way to force the thread to
exit, or to synthetically inject a newline into stdin. So my compromise
is to tell the user that they need to hit Return to finish interrupting
the command.
See the _warn_readline() function for a list of other potential
approaches.
After talking to Glyph, I came to my senses:
* There will be just one distribution, pip install "magic-wormhole",
which will provide both Twisted- and blocking-style libraries. (I was
looking at separate distributions for each, plus one for the server,
plus one for the common bits). This involves extra dependencies for
both (twisted folks will be stuck with 'requests', blocking folks will
be stuck with 'twisted'), but those aren't huge, and the
simplicity is worth it. Blocking users don't have to know anything
about Twisted to use Wormhole.
* Giving up on multiple distributions means we no longer need multiple
packages. This makes the import names easier to work with (more
relative imports).
* This will allow us to provide the blocking flavor with crochet (which
lets you call Twisted code from a blocking world; see the sketch after
this list), so we'll have just one implementation for both environments,
and can get WebSockets and Transit in both.
* The server is now part of the main package, as is the test suite.
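The crochet idea, in sketch form (this is not the eventual wormhole API,
just an illustration of calling Deferred-returning code from blocking
code):

    from crochet import setup, wait_for
    setup()   # starts the Twisted reactor in a background thread

    @wait_for(timeout=600.0)
    def get_data_blocking(w):
        # w.get_data() is assumed to return a Deferred; wait_for blocks
        # the calling (non-Twisted) thread until it fires, and raises on
        # failure or timeout
        return w.get_data()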
Fun fact: Travis was completely broken while the split was in place (it
wasn't testing the right things).
Otherwise, running "pip wheel ." from the source directory will fail
(it chokes while trying to copy the pipe inside _trial_temp/). Arguably
pip shouldn't be doing a full copy, but until/unless they change that,
this is an easy workaround.
This should speed up the protocol, since we don't have to wait for
acks (HTTP responses) unless we really want to. It also makes it easier
to have multiple messages in flight at once. The protocol is still
compatible with the old HTTP version (which is still used by the
blocking flavor), but requires an updated Rendezvous server that speaks
websockets.
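Roughly, the client side can now look like this (the JSON message shapes
here are illustrative, not the exact wire format):

    import json
    from autobahn.twisted.websocket import WebSocketClientProtocol

    class RendezvousClient(WebSocketClientProtocol):
        def onOpen(self):
            # no per-message HTTP round trip: send several commands
            # back-to-back and let responses arrive whenever they do
            for msg in ({"type": "claim", "channelid": 7},
                        {"type": "watch", "channelid": 7}):
                self.sendMessage(json.dumps(msg).encode("utf-8"))

        def onMessage(self, payload, isBinary):
            print("server:", json.loads(payload))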
set_code() no longer touches the network: it just stores the code and
channelid for later. We hold off doing 'claim' and 'watch' until we need
messages, triggered by get_verifier() or get_data() or send_data().
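In sketch form (the class and attribute names are stand-ins, not the
real code):

    class _LazyChannel(object):
        def __init__(self, rendezvous):
            self._rendezvous = rendezvous   # stand-in for the connection
            self._claimed = False

        def set_code(self, code):
            # no network traffic: just remember the code and channelid
            self._code = code
            self._channelid = int(code.split("-")[0])

        def _claim_and_watch_if_needed(self):
            # called from get_verifier()/get_data()/send_data(): do the
            # 'claim' (and then 'watch') only once, when messages are
            # actually needed
            if not self._claimed:
                self._claimed = True
                self._rendezvous.claim(self._channelid)
                self._rendezvous.watch(self._channelid)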
We check for error before sleeping, not just after waking. This makes it
possible to detect a WrongPasswordError in get_data() even if the other
side hasn't done a corresponding send_data(), as long as the other side
finished PAKE (and thus sent a CONFIRM message). The unit test was doing
just this, and was hanging.
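The shape of the fix, roughly (attribute and helper names are assumed,
and self._error is assumed to hold an exception instance):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def _get_message(self, phase):
        # a WrongPasswordError noticed while processing CONFIRM is stored
        # in self._error; raise it *before* waiting on a message the
        # other side may never send, not only after waking up
        if self._error:
            raise self._error
        msg = yield self._wait_for_phase(phase)   # assumed helper
        if self._error:
            raise self._error
        defer.returnValue(msg)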
Also, we now disable confirmation to exercise a verifier mismatch. An
upcoming implementation change would otherwise probably detect the
mismatch too early and throw WrongPasswordError before the test can
compare the verifiers.
This allows the Wormhole setup path to be simpler: consistently doing a
claim() just before watch(), regardless of whether we allocated the
channelid (with get_code), or dictated it (with set_code or
from_serialized).
The websocket lives on a Resource of the main rendezvous web site, and
the websocket URL is derived from the main "relay_url", so there's no
extra port to allocate, and no extra service to shut down.
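Something along these lines (the "v1" path and the helper are
assumptions, not the actual code):

    from autobahn.twisted.websocket import (WebSocketServerFactory,
                                            WebSocketServerProtocol)
    from autobahn.twisted.resource import WebSocketResource

    class _WSRendezvous(WebSocketServerProtocol):
        def onMessage(self, payload, isBinary):
            pass   # the real protocol handles rendezvous commands here

    def add_websocket(root, relay_url):
        # relay_url is e.g. "http://example.org:3000/wormhole-relay/";
        # rewriting "http"/"https" to "ws"/"wss" derives the websocket
        # URL from the existing one, so no extra port is needed
        ws_url = "ws" + relay_url[len("http"):] + "v1"
        factory = WebSocketServerFactory(ws_url)
        factory.protocol = _WSRendezvous
        root.putChild(b"v1", WebSocketResource(factory))
        return ws_url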
Use 'autobahn[twisted]' just to be sure (plain 'autobahn' worked fine
for py27, but maybe it's needed for py35 or something).
Autobahn is failing to do some conditional import and accidentally
depends upon pytrie (for some encrypted WAMP thing) when we didn't ask
for it (https://github.com/crossbario/autobahn-python/issues/604). This
commit also adds a manual dependency on pytrie (which is pretty small)
until the upstream bug is fixed.
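Concretely, the workaround is just an extra install_requires entry
(sketch of a setup.py, not the project's actual file):

    from setuptools import setup

    setup(
        name="magic-wormhole",
        install_requires=[
            "autobahn[twisted]",
            # workaround for
            # https://github.com/crossbario/autobahn-python/issues/604:
            # autobahn imports pytrie unconditionally, so declare it here
            # until the upstream fix ships
            "pytrie",
        ],
    )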
Deliver not-yet-JSONed objects to listeners (both in broadcast_message
and as the "catch-up" responses to add_listener). Also make the (web)
frontend responsible for adding "sent" timestamps. This all makes
rendezvous.py less web-centric.
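A sketch of the split (function names are illustrative):

    import json, time

    # rendezvous.py side: hand plain dicts to listeners; no JSON and no
    # timestamps at this layer
    def broadcast_message(listeners, msg_dict):
        for listener in listeners:
            listener(msg_dict)

    # web-frontend side: JSON-encode and add the "sent" timestamp only
    # at the edge, right before the bytes go out
    def make_web_listener(send_bytes):
        def deliver(msg_dict):
            msg = dict(msg_dict, sent=time.time())
            send_bytes(json.dumps(msg).encode("utf-8"))
        return deliver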