The new TorManager adds --launch-tor and --tor-control-port= arguments
(requiring the user to explicitly request a new Tor process, if that's what
they want). The default (when --tor is enabled) looks for a control port in
the usual places (/var/run/tor/control, localhost:9051, localhost:9151), then
falls back to hoping there's a SOCKS port in the usual
place (localhost:9050). (closes #64)
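As a rough sketch of that discovery order (the probe helpers here are
hypothetical; the real logic lives in TorManager):

    CONTROL_ENDPOINTS = [
        "unix:/var/run/tor/control",
        "tcp:127.0.0.1:9051",
        "tcp:127.0.0.1:9151",
    ]
    SOCKS_FALLBACK = "tcp:127.0.0.1:9050"

    def pick_tor(probe_control, probe_socks):
        # probe_* are assumed callables returning True if something
        # answers at the given endpoint
        for ep in CONTROL_ENDPOINTS:
            if probe_control(ep):
                return ("control-port", ep)
        if probe_socks(SOCKS_FALLBACK):
            return ("socks-only", SOCKS_FALLBACK)
        return None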
The ssh utilities should now accept the same tor arguments as ordinary
send/receive commands. There are now full tests for TorManager, and basic
tests for how send/receive use it. (closes #97)
Note that Tor is only supported on python2.7 for now, since txsocksx (and
therefore txtorcon) doesn't work on py3. You need to do "pip install
magic-wormhole[tor]" to get Tor support, and that will get you an inscrutable
error on py3 (referencing vcversioner, "install_requires must be a string or
list of strings", and "int object not iterable").
To run tests, you must install with the [dev] extra (to get "mock" and other
libraries). Our setup.py only includes "txtorcon" in the [dev] extra when on
py2, not on py3. Unit tests tolerate the lack of txtorcon (they mock out
everything txtorcon would provide), so they should provide the same coverage
on both py2 and py3.
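A sketch of how the py2-only extra can be expressed in setup.py (the
package lists are illustrative, not the real ones):

    import sys

    extras = {
        "tor": ["txtorcon"],
        "dev": ["mock", "tox"],
    }
    if sys.version_info[0] == 2:
        extras["dev"].append("txtorcon")
    # then: setup(..., extras_require=extras)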
These point to the same host (same IP address) as before, but the new names
are tied to the project's official domain (magic-wormhole.io), rather than my
personal one, so they can be managed independently.
at least by the same side. This forces the contour of claims (by any given
side) to be strictly unclaimed -> claimed -> released. The "claim"
action (unclaimed -> claimed) is idempotent and can be repeated arbitrarily,
as long as the repeats happen on separate websocket connections. Likewise for
the "release" action (claimed -> released). But once a side releases a
nameplate, it should never roll so far back that it tries to claim it again,
especially because the first claim causes a mailbox to be allocated, and if
we manage to allocate two different mailboxes for a single nameplate, then
we've thrown idempotency out the window.
and make it possible to call release() even though you haven't called claim()
on that particular socket (releasing a claim that was made on some previous
websocket).
This should enable reconnecting clients, as well as intermittently-connected
"offline" clients.
refs #118
This should leave stdout clean for use in `foo | wormhole send --text=-` and
`wormhole rx CODE >foo`, although the forms that want interactive code entry
probably won't work that way.
closes #99
* Previously, we only connected to the relay supplied by our partner, which
meant that if our relay differed from theirs, we'd never connect
* But we must de-duplicate the relays because when our relay *is* the same as
theirs, we'd have two copies, which means two connections. Now that we
deliver sided handshakes, we can tolerate that (previously, our two
connections would be matched with each other), but it's still wasteful.
This also fixes our handling of relay hints to accept multiple specific
endpoints in each RelayHint. The idea here is that we might know multiple
addresses for a single relay (maybe one IPv4, one IPv6, a Tor .onion, and an
I2P address). Any one connection is good enough, and the connections we can
try depend upon what local interfaces we discover. So a clever implementation
could refrain from making some of those connections when it knows the sibling
hints are just as good. However, we might still have multiple distinct relays,
for which it is *not* sufficient to connect to just one.
The change is to create and process RelayV1Hint objects properly, and to set
the connection loop to try every endpoint inside each RelayV1Hint. This is
not "clever" (we could nominally make fewer connection attempts), but it's
plenty good for now.
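In rough terms (the namedtuple shapes follow the prose above; the actual
fields may differ):

    from collections import namedtuple

    DirectTCPV1Hint = namedtuple("DirectTCPV1Hint", ["hostname", "port"])
    RelayV1Hint = namedtuple("RelayV1Hint", ["hints"])  # tuple of endpoints

    def endpoints_to_try(relays):
        # not "clever": try every endpoint inside each RelayV1Hint. Any
        # one endpoint per relay would suffice, but every distinct relay
        # must still be attempted.
        for relay in relays:
            for hint in relay.hints:
                yield hint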
refs #115
fix relay hints
Tools which use `wormhole send` under the hood should use a distinct
--appid= (setting the same URL-shaped value on both sides, starting with a
domain name related to the tool and/or its author), so wormhole codes used by
those tools won't compete for short channelids with other tools, or the
default text/file/directory-sending tool.
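For example, a hypothetical backup tool might pass
`--appid=example.com/my-backup/v1` (the same value on both sides), so its
codes live in their own namespace.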
Closes #113
closes #91
Also tweaks an error message: don't say "refusing to clobber pre-existing
file FOO" when we don't check that it's actually a file. Just say "..
pre-existing 'FOO'".
there was a function to "abbreviate" sizes, but it was somewhat
unclear and incomplete. reuse the sizeof_fmt_* set of functions from
the borg backup project (MIT licensed) to implement a more complete
and flexible display that will scale up to the Yottabyte and
beyond. it also supports IEC units (like the "kibibyte", AKA 1024
bytes) if you fancy that stuff.
this is a workaround for #91: it allows users to better see the size
of the file that will be transferred.
*some* places are still kept in bytes, most notably when receive fails
to receive all bytes ("got %d bytes, wanted %d") because we may want
more clarity there.
text transfers also use the "bytes" suffix (instead of "B") because it
will commonly not reach beyond the KiB range.
note that the test suite only covers the decimal (non-IEC) prefixes, but
that is assumed to be sufficient coverage to consider the code correct.
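the shape of the borrowed helpers, heavily condensed (see borg's
sizeof_fmt_* for the real thing):

    def sizeof_fmt(num, units, power, suffix="B", precision=2):
        for unit in units[:-1]:
            if abs(num) < power:
                return "{:.{}f} {}{}".format(num, precision, unit, suffix)
            num /= float(power)
        return "{:.{}f} {}{}".format(num, precision, units[-1], suffix)

    def sizeof_fmt_decimal(num):  # 1 kB == 1000 bytes
        return sizeof_fmt(num, ["", "k", "M", "G", "T", "P", "E", "Z", "Y"], 1000)

    def sizeof_fmt_iec(num):      # 1 KiB == 1024 bytes
        return sizeof_fmt(num, ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi", "Yi"], 1024)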
I think somebody was port-scanning the server (or pointed some
non-wormhole client at it), and caused some exceptions in the logs.
These are still bad handshakes, but should be logged normally instead of
throwing exceptions.
- move to 'wormhole ssh' group with accept/invite subcommands
- change names of methods
- check for permissions
- use --user option (instead of --auth-file)
- move implementation to cmd_ssh.py
- if multiple public keys, ask user
Some of us can never remember the old ditty:
i before e, except after c
or when sounding like "a"
as in neighbor or weigh.
Perhaps magic wormhole can coddle us in our misorthography :)
So instead of "wormhole --verify send", use "wormhole send --verify".
The full set of arguments that were moved down:
* --code-length=
* --verify
* --hide-progress
* --no-listen
* --tor
The following remain as top-level arguments (which should appear after
"wormhole" and before the subcommand):
* --relay-url=
* --transit-helper=
* --dump-timing=
* --version
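So a full invocation now looks like
`wormhole --relay-url=ws://relay.example.org:4000/v1 send --verify myfile`
(hypothetical values).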
The values set by the base Config constructor could mask Click parsers
that weren't supplying defaults properly, or which were using different
defaults.
When tests need a Config object, they now call a function which invokes
Click with a mocked-out go() function, and grabs the Config object
before actually doing anything with it.
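A sketch of that helper, assuming Click's CliRunner plus mock (the module
paths and names here are illustrative):

    from click.testing import CliRunner
    import mock

    from wormhole.cli import cli

    def config(*argv):
        # run the real parser, but capture the Config object instead of
        # actually executing the command
        with mock.patch("wormhole.cli.cli.go") as go:
            CliRunner().invoke(cli.wormhole, argv, catch_exceptions=False)
        return go.call_args[0][1]  # go(f, config) -> grab the config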
With this, both clients and servers will send a PING at least once every
minute, and will drop connections that haven't seen any traffic for 10
minutes.
This should help keep NAT table entries alive, and will drop connections
that are no longer viable because their NAT entries have expired.
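If the websocket layer is autobahn, that maps roughly onto its auto-ping
knobs (a sketch, not the exact settings):

    factory.setProtocolOptions(
        autoPingInterval=60,   # send a PING at least once a minute
        autoPingTimeout=600,   # drop peers that stay silent for 10 minutes
    )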
closes #60
Without this, the sender drops the connection before the "close" message
has made it to the server, which leaves the mailbox hanging until it
expires. It still lives in a 'd.addBoth()' slot, so it gets closed even
if some error occurs, but we wait for its Deferred to fire in both
success and failure cases.
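A sketch of the pattern (assuming w.close() returns a Deferred that fires
once the server acks the close):

    def close_when_done(w, d):
        def _close_and_pass_through(res):
            d2 = w.close()
            d2.addCallback(lambda _: res)  # preserve the original outcome
            return d2
        return d.addBoth(_close_and_pass_through)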
We already hard-code 'relay.sqlite', so I don't see a lot of value in
making the stats file configurable too. That said, if it makes
life easier for packagers (e.g. start-stop-daemon or systemd wanting
these files to go into /var/run/something/, and if it isn't sufficient
to just use /var/run/something/ as the CWD), I'd accept a patch to
add it back.
The DB queries this uses aren't particularly efficient, and when the
time it takes to run starts to become a problem, we should do an
optimization pass.
This counts the number of "standalone" mailboxes we create, which
happens when a client does open() without first using a nameplate. The
current client doesn't do this, but future clients might.
This moves responsibility for the periodic prune-everything Timer up to
RelayServer too. That way we can be sure the stats are dumped
immediately after prune, and we can incorporate stats from Transit as
well.
The new approach runs every 10 minutes and keeps a
nameplate/mailbox/messages "channel" alive if the mailbox has been
updated within 11 minutes, or if there has been an attached listener
within that time.
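The keep/prune rule, as a sketch (timestamps in seconds; names are
illustrative):

    PRUNE_PERIOD = 10 * 60       # how often the prune loop runs
    CHANNEL_LIFETIME = 11 * 60   # how long an idle channel survives

    def keep_channel(now, mailbox_updated, last_listener_seen):
        cutoff = now - CHANNEL_LIFETIME
        return mailbox_updated > cutoff or last_listener_seen > cutoff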
Also remove the "nameplates.updated" column. Now we only track "updated"
timestamps on the "mailboxes" table, and a new mailbox will preserve any
attached nameplate.
Unless/until people start writing new applications (with different
app-ids), this code is unlikely to get used very much, and the code is
simpler without it.
I changed my mind, it's actually easier if 'wormhole-server stop' (and
'restart') does *not* throw an error when there wasn't already a server
running in that directory. Specifically that lets me use 'restart' as an
idempotent "make sure a server is running" command.
These weren't running because Click complained about an ASCII locale
when running under py3, which triggered an error check that was there to
detect broken virtualenvs, so those tests were being skipped.
The fix appears to be to force the en_US.UTF-8 locale when running the
wormhole program in a subprocess.
This adds a test for database upgrades, which I developed on a branch
that added a new DB schema (v3) and an upgrader to match, but then I
changed my mind about the schema and removed that part. The test will be
useful some time in the future when I change the schema in a small
enough way that I bother to write an upgrader for the change. For now,
the test is disabled.
In addition, the upgrader test is kind of lame. I'd really prefer to
assert that the upgraded schema is identical to the schema of a
brand-new (latest-version) database, but ALTER TABLE doesn't quite work
that way (comments are omitted, and the order of the columns is slightly
different).
This also adds database.dump_db() for the tests.
There was some vestigial server-cli code (left over in the client-side
wormhole.cli.cli_args) that used port 3000/3001, and it accidentally got
used for the new Click-based parser, rather than the actual server-cli
code (in wormhole.server.cli_args) that uses port 4000/4001. This
changes the port numbers to match (everything uses 4000/4001 these days,
to avoid confusing interactions with the old 0.7.6 server that might
still be listening on the old ports).
GNU libreadline, and the libedit-based library shipped with the stock OS X
python, require different key-binding syntaxes to enable tab completion.
The previous commit to fix this (0977ef0) added both binding commands.
Unfortunately, when GNU libreadline is given the libedit-style
command (i.e. "bind ^I rl_complete"), it binds the letter "b" to a
non-existent command "ind", or something, and as a result the letter "b"
doesn't work anymore.
This patch uses the readline docstring to sense which flavor is
installed, and only runs the one binding command that's appropriate.
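The sensing trick, roughly (libedit's compatibility module advertises
itself in the module docstring):

    import readline

    if "libedit" in (readline.__doc__ or ""):
        readline.parse_and_bind("bind ^I rl_complete")  # libedit flavor
    else:
        readline.parse_and_bind("tab: complete")        # GNU libreadline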
refs #37
The appveyor tests were failing because their VMs only have 127.0.0.1,
and stripping it out resulted in an empty hint list, which meant Transit
couldn't work at all.
With increased usage, I'm seeing a buildup of stale channels. Since the
channels aren't properly ephemeral yet (where they get closed as soon as
the last subscriber disconnects), clients which terminate without
calling close() tend to leave the channel lying around. We don't have
"persistent wormholes" yet, so channels should be much more ephemeral
than they currently are.
Apple's stock python doesn't use GNU libreadline; instead it uses BSD
libedit with a readline compatibility interface. The syntax to enable
tab completion is different for libedit. By including both bindings,
autocomplete should work on both flavors.
Closes #37. Thanks to @wsanchez for the catch and the fix.
(one displayed message per received welcome["motd"])
There's not much value in prohibiting the server from sending multiple
MOTD messages, and it would prevent us from using it to display a "your
client is using an old API, please upgrade" message after having already
sent a regular "please donate" MOTD message. (We could send a second
welcome message with ["error"] to kill the client, but ["motd"] is the
most convenient way to deliver a non-fatal warning).
This is an alias for the same host, so it's not really an incompatible
change. The new hostname is my personal domain, and seems a bit more
suitable for this service.
The reasoning is that this string is only ever likely to refer to the
version of the primary/initial client (the CLI application, written in
Python, that you get with "pip install magic-wormhole"). When there are
other implementations, with unrelated versions, they should obviously
not pay attention to a warning about the other implementation being out
of date.
This gives us room in the future to put other keys there, like one which
says we want to use Noise for the phase-message encryption instead of
our current HKDF scheme.
This will be useful for the upcoming "persistent wormhole" mode. A
client might send an allocation request, crash/terminate before
receiving a response, then restart, then re-send the request. If the
server sees a request with the same request_id as a previous request, it
can return the same nameplate.
We'll need code changes on both sides to support this (nothing sends or
checks request_id yet), but this lands the schema change early to reduce
future disruption.
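Sketched server-side, for whenever clients start sending it (the column
and helper names are hypothetical):

    def allocate(db, side, request_id=None):
        if request_id is not None:
            row = db.execute(
                "SELECT `name` FROM `nameplates`"
                " WHERE `side`=? AND `request_id`=?",
                (side, request_id)).fetchone()
            if row:
                return row["name"]  # repeated request: same nameplate
        return allocate_new_nameplate(db, side, request_id)  # hypothetical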
This will allow a future peer to figure out what transit modes we can
and cannot do, and thus avoid spinning up expensive modes that we won't
be able to use (e.g. WebRTC).
This upgrades the ACK that wormhole-receive returns (when it finishes
receiving all the data) to a dictionary. The dict includes the SHA256
hash of everything it received, and the sender checks this for a match
before declaring the transfer to be a success. This guards against data
being shuffled somehow during transit.
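In outline (the helper names are invented; the real code lives in the
send/receive commands):

    import hashlib

    def receive_and_ack(chunks, outfile, send_ack):
        # receiver: hash every chunk as it lands on disk
        hasher = hashlib.sha256()
        for chunk in chunks:
            hasher.update(chunk)
            outfile.write(chunk)
        # the sender compares this digest against the hash of what it
        # sent, and only then declares the transfer a success
        send_ack({"ack": "ok", "sha256": hasher.hexdigest()})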
This better reflects the purpose of the message. Key confirmation is a
side-effect.
This patch only changes the "phase:" name and the key-derivation string.
A subsequent patch will modify the function and variable names to match.
The file-send protocol now sends a "hints-v1" key in the "transit"
message, which contains a list of JSON data structures that describe the
connection hints (a mixture of direct, tor, and relay hints, for now).
Previously the direct/tor and relay hints were sent in different keys,
and all were sent as strings like "tcp:hostname:1234" which had to be
parsed by the recipient.
The new structures include a version string, to make it easier to add
new types in the future. Transit logs+ignores hints it cannot
understand.
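Illustrative shape only (the field names are examples, not a spec):

    transit_message = {
        "transit": {
            "hints-v1": [
                {"type": "direct-tcp-v1",
                 "hostname": "192.168.1.5", "port": 9483},
                {"type": "relay-v1",
                 "hints": [{"type": "direct-tcp-v1",
                            "hostname": "relay.example.org", "port": 4001}]},
            ],
        },
    }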
In the future, both sides should expect to receive "transit" messages at
any time, and they will add to the list of hints that they should try.
For now, each side only sends a single transit message, before they send
the offer (sender) or answer (receiver).
This moves us slowly towards a file-transfer protocol that exchanges
multiple messages, with a single offer (sender->receiver) and
answer (receiver->sender), and one or more connection hint messages (in
either direction) that appear gradually over time as connection
providers come online.
At present the protocol still expects the whole hint list to be present
in the offer/answer message.
This should enable forwards-compatibility with clients which send extra
data, like a pre-PAKE "auxdata" message that hints we should spin up a
tor client (because they can connect to it) while we're waiting for the
user to type in the wormhole code.
Previously the encryption key used for "phase messages" (anything sent
from one side to the other, protected by the shared PAKE-generated
session key) was derived just from the session key and the phase name.
The two sides would use the same key for their first message (but with
random, thus different, nonces).
This uses the sending side's string (a random 5-byte/10-character hex
string) in the derivation process too, so the two sides use different
keys. This gives us an easy way to reject reflected messages. We already
ignore messages that claim to use a "side" which matches our own (to
ignore server echoes of our own outbound messages). With this change, an
attacker (or the server) can't swap in the payload of an outbound
message, change the "side" to make it look like a peer message, and then
let us decrypt it correctly.
It also changes the derivation function to combine the phase and side
values safely. This didn't matter much when we only had one
externally-provided string, but with two, there's an opportunity for
format confusion if they were combined with a simple delimiter. Now we
hash both values before concatenating them.
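The combination step, sketched (HKDF stands in for the existing
key-derivation helper):

    from hashlib import sha256

    def phase_purpose(side, phase):
        # hash each externally-supplied string before concatenating, so
        # neither value can masquerade as part of the other
        side_digest = sha256(b"side:" + side.encode("ascii")).digest()
        phase_digest = sha256(b"phase:" + phase.encode("ascii")).digest()
        return b"wormhole:phase:" + side_digest + phase_digest

    # phase_key = HKDF(session_key, length, CTXinfo=phase_purpose(side, phase))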
This breaks interoperability with clients from before this change. They
will always get WrongPasswordErrors.
* add "released" ack-response for "release" command, to sync w.close()
* move websocket URL to root
* relayurl= should now be a "ws://" URL
* many tests pass (except for test_twisted, which will be removed, and
test_scripts)
* still moving integration tests from test_twisted to
test_wormhole.Wormholes
This made sense for ServerSentEvent channels (which had no purpose once
the channel was gone), but not so much for websockets. And it prevented
testing duplicate-close.
Pass in a handle and a pair of functions, rather than an object with two
well-known methods. This should make it easier to subscribe to multiple
channels in the future.
but only if the client is modern enough to include "id" in the message,
which lets us avoid sending acks to a 0.7.5 client (which would cause
them to abort, since they don't like unrecognized server messages).
The acks let the client learn the server_rx time of messages that
terminate on the server, like "allocate" and "claim".
This improves the error behavior when --verify is used but there's a
WrongPasswordError: the mismatch is detected before the verifiers are
displayed or confirmation is requested.
It requires that the far end sends a "_confirm" message, which was
introduced in release 0.6.0. Use with older versions (if it doesn't
break for other reasons) will cause a hang.
This patch also deletes test_twisted.Basic.test_verifier_mismatch, since
both sides now detect this on their own. It changes
test_wrong_password() too, since we might now notice the error during
send_data (previously we'd only see it in get_data).
One downside is that we keep the wormhole channel allocated longer (we
have to finish the file transfer before we can deallocate it, which
could take a while for large files). Maybe we can fix this in the
future.
Also clean up test_scripts.PregeneratedCode:
* fetch results from both sides at the same time
* only check rc when using a subprocess, since the direct call doesn't
use rc=0 anymore
* no need to cancel the other side's Deferred when one errors
* provide more information if stderr was non-empty
And provide a close() that can live at the end of a Deferred chain, so
callers can do d.addBoth(w.close).
I like auto-close-on-error in general, but I'm removing it so I can
clean up the error-handling pathways. It will probably come back later.
The constraint is that it must be possible to wait on the return
Deferred that close() gives you (to synchronize tests, or keep the CLI
program running long enough to deallocate the channel) even if something
else (an error handler) called close() earlier. This will require
either a OneShotObserverList, or keeping a "deallocated" Deferred around
in case more callers want to wait on it later.
If we're closing because of an error, we need to sleep through the old
error, to be able to wait for the "deallocated" message. This might want
to be different: maybe clear the error first, or store the errors in a
list and sleep until a second error happens.
These were split out to make the blocking- and twisted-based
implementations share some code, but now that we're down to just
Twisted, it's clearer to merge them back in.
Hitting Control-C (which sends SIGINT) while we're waiting in the
readline-based input_code() function didn't shut down the process
properly: the reactor would wait for the readline thread to exit, which
wouldn't happen until it finished getting a code, which requires the
user to hit Return. I haven't found a good way to force the thread to
exit, or to synthetically inject a newline into stdin. So my compromise
is to tell the user that they need to hit Return to finish interrupting
the command.
See the _warn_readline() function for a list of other potential
approaches.
This uses a single TCP connection to the relay server for all
requests (although it probably uses a second one for the downstream
EventSource feed). This should squeeze out some of the round-trip times.
This adds an expected= argument to Connection.connectConsumer(), which
then returns a Deferred that fires when enough bytes have been written
to the consumer. It also adds Connection.writeToFile(), a helper method
that writes bytes to a filehandle.
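Usage, sketched (signatures inferred from the description, so treat them
as approximate):

    d = connection.connectConsumer(consumer, expected=filesize)
    # d fires once 'expected' bytes have been written to the consumer

    d2 = connection.writeToFile(f, expected=filesize)
    # same idea, but writes the records straight into an open filehandle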
I made the classic dataReceived() mistake, and exited the function after
delivering the first record. Keep at it until there are no complete
records left.
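The fixed shape, schematically (the framing helper is hypothetical):

    def dataReceived(self, data):
        self.buf += data
        while True:
            record = self._pop_complete_record()  # None if incomplete
            if record is None:
                break
            self.recordReceived(record)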
The previous commits improve test failures by dropping relay connections
at shutdown, and flunking a test quickly when one client fails but the
other one hangs.
If that doesn't work (say, some client has a time.sleep(), or other
stall that isn't affected by the relay shutdown), we'll be left with an
active thread holding that hanging client.
This patch adds a check to wormhole.test.common.ServerBase.tearDown that
looks for active threads, waits a second (after stopService), then
checks the threadpool again. If the threadpool is empty, everything is
fine. If not, it prints a message (to stdout) to inform the impatient
user why the test is probably hanging.
When test_scripts ran two clients at the same time, an error in one
could leave the other hanging (in a thread). One Deferred would errback,
the other would hang. Tests wait on one Deferred at a time, so if we're
unlucky and were waiting on the hanging Deferred (instead of the
erroring one), we'll wait forever, or at least until the default test
timeout of 180 seconds.
This adds an errback to notice when either client has errored, and
cancels the other Deferred, so it doesn't matter which one we wait upon
first.
'readline' is part of the python stdlib, so declaring a dependency on it
doesn't help. It doesn't exist on windows, and the pypi 'readline'
module doesn't work on windows. So instead, just attempt to import
readline, and if that fails, fall back to a non-completion flavor.
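I.e. roughly:

    try:
        import readline  # imported for its input-editing side effects
    except ImportError:
        readline = None  # e.g. windows: plain input, no tab completion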
This ensures that we'll be ready for them. Previously there was a race
between us revealing the direct hints to the peer, and us setting the
transit key (thus allowing us to check inbound handshake requests). The
Transit instance didn't handle the race, causing errors to be thrown
when the other side connected quickly.
This ensures that we'll be ready for them. Previously there was a race
between us revealing the direct hints to the peer, and us setting the
transit key (thus allowing us to check inbound handshake requests). The
Transit instance handles this race (with an interlock on the transit
key), but it's still nicer to do it cleanly.
This exposed a new race in Transit, where the inbound connection would
complete before transit.connect() had been called. The previous commit
adds an interlock to wait for that too. Until this change, the transit
key lock was covering that one up.