Since input_code() sets the nameplate before setting the rest of the code,
and since the sender's PAKE will arrive as soon as the nameplate is set, we
could see got_pake before got_code, and Key wasn't prepared to handle that.
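A minimal sketch of the fix (method names follow the text above; the real Key
machine is more involved): stash an early PAKE message until the code arrives.

    class Key(object):
        def __init__(self):
            self._code = None
            self._early_pake = None

        def got_code(self, code):
            self._code = code
            if self._early_pake is not None:
                pake, self._early_pake = self._early_pake, None
                self._handle_pake(pake)

        def got_pake(self, pake):
            if self._code is None:
                self._early_pake = pake  # PAKE beat the code: stash it
            else:
                self._handle_pake(pake)

        def _handle_pake(self, pake):
            pass  # elided: feed the message to SPAKE2 now that we have the code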
* finally wire up "application versions"
* remove when_verifier (which used to fire after key establishment, but
before the VERSION message was received or verified)
* fire when_verified and when_version at the same time (after VERSION is
verified), but with different args (see the sketch below)
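For flavor, a hypothetical caller under the new arrangement (the exact return
values are assumptions beyond what the bullets state):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def on_established(w):
        verifier = yield w.when_verified()  # fires with the verifier
        versions = yield w.when_version()   # fires with the peer's app versions
        defer.returnValue((verifier, versions))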
Start defining manager state machines for nameplates, mailboxes, the
PAKE key-establishment process, and the bit that knows it can drop the
connection when both nameplates and mailboxes have been released.
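As a sketch of the shape these machines might take (using the Automat library
here is an assumption; the states and inputs are illustrative):

    from automat import MethodicalMachine

    class NameplateManager(object):
        _machine = MethodicalMachine()

        @_machine.state(initial=True)
        def unclaimed(self):
            "no claim is held"

        @_machine.state()
        def claimed(self):
            "a claim is held"

        @_machine.state()
        def released(self):
            "the claim has been given back"

        @_machine.input()
        def claim(self):
            "ask the server for a claim"

        @_machine.input()
        def release(self):
            "give the claim back"

        @_machine.output()
        def _send_claim(self):
            pass  # elided: send the CLAIM message

        @_machine.output()
        def _send_release(self):
            pass  # elided: send the RELEASE message

        unclaimed.upon(claim, enter=claimed, outputs=[_send_claim])
        claimed.upon(release, enter=released, outputs=[_send_release])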
The new TorManager adds --launch-tor and --tor-control-port= arguments
(requiring the user to explicitly request a new Tor process, if that's what
they want). The default (when --tor is enabled) looks for a control port in
the usual places (/var/run/tor/control, localhost:9051, localhost:9151), then
falls back to hoping there's a SOCKS port in the usual
place (localhost:9050). (closes#64)
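The search order, as a sketch (the real TorManager logic, and txtorcon's own
helpers, may differ):

    from twisted.internet import defer
    from twisted.internet.endpoints import clientFromString

    _CONTROL_DESCRIPTORS = [
        "unix:path=/var/run/tor/control",
        "tcp:127.0.0.1:9051",
        "tcp:127.0.0.1:9151",
    ]

    @defer.inlineCallbacks
    def find_control_endpoint(reactor, probe):
        # 'probe' is a hypothetical callable: given an endpoint, it returns
        # a Deferred firing True if a Tor control port answers there
        for desc in _CONTROL_DESCRIPTORS:
            ep = clientFromString(reactor, desc)
            if (yield probe(ep)):
                defer.returnValue(ep)
        defer.returnValue(None)  # caller falls back to SOCKS at tcp:127.0.0.1:9050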
The ssh utilities should now accept the same tor arguments as ordinary
send/receive commands. There are now full tests for TorManager, and basic
tests for how send/receive use it. (closes#97)
Note that Tor is only supported on python2.7 for now, since txsocksx (and
therefore txtorcon) doesn't work on py3. You need to do "pip install
magic-wormhole[tor]" to get Tor support, and that will get you an inscrutable
error on py3 (referencing vcversioner, "install_requires must be a string or
list of strings", and "int object not iterable").
To run tests, you must install with the [dev] extra (to get "mock" and other
libraries). Our setup.py only includes "txtorcon" in the [dev] extra when on
py2, not on py3. Unit tests tolerate the lack of txtorcon (they mock out
everything txtorcon would provide), so they should provide the same coverage
on both py2 and py3.
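A sketch of how such a conditional extra can be expressed in setup.py ("mock"
is named above; any other dev dependencies here are placeholders):

    import sys

    dev_requires = ["mock"]
    if sys.version_info[0] == 2:
        dev_requires.append("txtorcon")  # txtorcon only works on py2 for now

    extras_require = {
        "tor": ["txtorcon"],
        "dev": dev_requires,
    }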
These point to the same host (same IP address) as before, but the new names
are tied to the project's official domain (magic-wormhole.io), rather than my
personal one, so they can be managed independently.
at least by the same side. This forces the contour of claims (by any given
side) to be strictly unclaimed -> claimed -> released. The "claim"
action (unclaimed -> claimed) is idempotent and can be repeated arbitrarily,
as long as the repeats happen on separate websocket connections. Likewise for
the "release" action (claimed -> released). But once a side releases a
nameplate, it should never roll so far back that it tries to claim it again,
especially because the first claim causes a mailbox to be allocated, and if
we manage to allocate two different mailboxes for a single nameplate, then
we've thrown idempotency out the window.
Also make it possible to call release() even though you haven't called claim()
on that particular socket (releasing a claim that was made on some previous
websocket).
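That contour, as a toy transition table (not the server's actual
representation); note the transitions are keyed per side, not per websocket
connection:

    TRANSITIONS = {
        ("unclaimed", "claim"):   "claimed",
        ("claimed",   "claim"):   "claimed",   # repeated claim is a no-op
        ("claimed",   "release"): "released",
        ("released",  "release"): "released",  # repeated release is a no-op
    }

    def next_state(state, action):
        try:
            return TRANSITIONS[(state, action)]
        except KeyError:
            raise ValueError("side may not %r while %r" % (action, state))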
This should enable reconnecting clients, as well as intermittently-connected
"offline" clients.
refs #118
This should leave stdout clean for use in `foo | wormhole send --text=-` and
`wormhole rx CODE >foo`, although the forms that want interactive code entry
probably won't work that way.
closes#99
* Previously, we only connected to the relay supplied by our partner, which
meant that if our relay differed from theirs, we'd never connect
* But we must de-duplicate the relays: when our relay *is* the same as
theirs, we'd otherwise have two copies, which means two connections. Now that we
deliver sided handshakes, we can tolerate that (previously, our two
connections would be matched with each other), but it's still wasteful.
This also fixes our handling of relay hints to accept multiple specific
endpoints in each RelayHint. The idea here is that we might know multiple
addresses for a single relay (maybe one IPv4, one IPv6, a Tor .onion, and an
I2P address). Any one connection is good enough, and the connections we can
try depend upon what local interfaces we discover. So a clever implementation
could refrain from making some of those connections when it knows the sibling
hints are just as good. However, we might still have multiple distinct
relays, for which it is *not* sufficient to connect to just one.
The change is to create and process RelayV1Hint objects properly, and to set
the connection loop to try every endpoint inside each RelayV1Hint. This is
not "clever" (we could nominally make fewer connection attempts), but it's
plenty good for now.
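In sketch form (field layouts here follow the names in the text, but are
assumptions about the real hint objects):

    from collections import namedtuple

    DirectTCPV1Hint = namedtuple("DirectTCPV1Hint", ["hostname", "port"])
    RelayV1Hint = namedtuple("RelayV1Hint", ["hints"])  # tuple of endpoint hints

    def relay_connection_targets(ours, theirs):
        # de-duplicate relays (ours may equal theirs), then yield every
        # endpoint inside each RelayV1Hint: one connection per relay is
        # enough, but each distinct relay must be tried
        seen = set()
        for relay in list(ours) + list(theirs):
            if relay in seen:
                continue
            seen.add(relay)
            for hint in relay.hints:
                yield hint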
refs #115
fix relay hints
Tools which use `wormhole send` under the hood should use a distinct
--appid= (setting the same URL-shaped value on both sides, starting with a
domain name related to the tool and/or its author), so wormhole codes used by
those tools won't compete for short channelids with other tools, or the
default text/file/directory-sending tool.
Closes#113
closes#91
Also tweaks an error message: don't say "refusing to clobber pre-existing
file FOO" when we don't check that it's actually a file. Just say "..
pre-existing 'FOO'".
there was a function to "abbreviate" sizes, but it was somewhat
unclear and incomplete. reuse the sizeof_fmt_* set of functions from
the borg backup project (MIT licensed) to implement a more complete
and flexible display that will scale up to the Yottabyte and
beyond. it also supports IEC units (like the "kibibyte", AKA 1024
bytes) if you fancy that stuff.
this is a workaround for #91: it allows users to better see the size
of the file that will be transferred.
*some* places are still kept in bytes, most notably when receive fails
to receive all bytes ("got %d bytes, wanted %d") because we may want
more clarity there.
text transfers also use the "bytes" suffix (instead of "B") because it
will commonly not reach beyond the KiB range.
note that the test suite only covers the decimal (non-IEC) prefixes, but
that is assumed to be sufficient to consider the code correct.
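for reference, a condensed sketch in the spirit of borg's helpers (names and
formatting details are approximations, not the exact imported code):

    def sizeof_fmt(num, suffix="B",
                   units=("", "K", "M", "G", "T", "P", "E", "Z", "Y"),
                   power=1000):
        # climb the unit ladder until the value fits under one step of 'power'
        for unit in units[:-1]:
            if abs(num) < power:
                return "%3.1f%s%s" % (num, unit, suffix)
            num /= float(power)
        return "%.1f%s%s" % (num, units[-1], suffix)

    def sizeof_fmt_iec(num, suffix="B"):
        return sizeof_fmt(num, suffix=suffix,
                          units=("", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi", "Yi"),
                          power=1024)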
I think somebody was port-scanning the server (or pointed some
non-wormhole client at it), and caused some exceptions in the logs.
These are still bad handshakes, but should be logged normally instead of
throwing exceptions.
- move to 'wormhole ssh' group with accept/invite subcommands
- change names of methods
- check for permissions
- use --user option (instead of --auth-file)
- move implementation to cmd_ssh.py
- if multiple public-keys, ask user
Some of us can never remember the old ditty:
i before e, except after c
or when sounding like "a"
as in neighbor or weigh.
Perhaps magic wormhole can coddle us in our misorthography :)
So instead of "wormhole --verify send", use "wormhole send --verify".
The full set of arguments that were moved down:
* --code-length=
* --verify
* --hide-progress
* --no-listen
* --tor
The following remain as top-level arguments (which should appear after
"wormhole" and before the subcommand):
* --relay-url=
* --transit-helper=
* --dump-timing=
* --version
The values set by the base Config constructor could mask Click parsers
that weren't supplying defaults properly, or which were using different
defaults.
When tests need a Config object, they now call a function which invokes
Click with a mocked-out go() function, and grabs the Config object
before actually doing anything with it.
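A sketch of that helper (the module paths, and the assumption that go()
receives the Config as its second argument, are guesses at the real layout):

    import mock
    from click.testing import CliRunner
    from wormhole.cli import cli  # assumed location of the Click entrypoints

    def parse_config(*argv):
        # run the real Click parser, but intercept go() so nothing executes
        with mock.patch("wormhole.cli.cli.go") as go:
            CliRunner().invoke(cli.wormhole, argv, catch_exceptions=False)
        return go.call_args[0][1]  # assumed call shape: go(func, config)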
With this, both clients and servers will send a PING at least once every
minute, and will drop connections that haven't seen any traffic for 10
minutes.
This should help keep NAT table entries alive, and will drop connections
that are no longer viable because their NAT entries have expired.
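If this rides on autobahn's auto-ping machinery (an assumption about the
mechanism), the knobs would look roughly like:

    from autobahn.twisted.websocket import WebSocketServerFactory

    factory = WebSocketServerFactory("ws://127.0.0.1:4000/v1")
    factory.setProtocolOptions(
        autoPingInterval=60,   # send a PING at least once a minute
        autoPingTimeout=600,   # drop a peer that stays silent for ten minutes
    )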
closes#60
Without this, the sender drops the connection before the "close" message
has made it to the server, which leaves the mailbox hanging until it
expires. It still lives in a 'd.addBoth()' slot, so it gets closed even
if some error occurs, but we wait for its Deferred to fire in both
success and failure cases.
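A sketch of that ordering ('d' is the existing teardown Deferred, 'w' the
wormhole; both stand in for the real objects):

    def wait_for_close(d, w):
        # chain the wormhole close into the teardown Deferred, preserving
        # the original success-or-failure result
        def _close_then(res):
            d2 = w.close()                 # returns a Deferred; wait for it
            d2.addCallback(lambda _: res)  # then pass the original result through
            return d2
        return d.addBoth(_close_then)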
We already hard-code 'relay.sqlite', so I don't see a lot of value in
making the stats file configurable too. That said, if it makes
life easier for packagers (e.g. start-stop-daemon or systemd wanting
these files to go into /var/run/something/ , and if it isn't sufficient
to just use /var/run/something/ as the CWD), I'd accept a patch to
add it back.
The DB queries this uses aren't particularly efficient, and when the
time it takes to run starts to become a problem, we should do an
optimization pass.