The Windows-specific bugfix we needed is now in the current Tox release,
so we no longer need pre-release code. In any case, the upstream git repo
moved (from Bitbucket to GitHub), so installing from Bitbucket was
failing.
I think somebody was port-scanning the server (or pointed some
non-wormhole client at it), which caused some exceptions in the logs.
These are still bad handshakes, but they should now be logged normally
instead of raising exceptions.
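For illustration, here is a minimal Twisted sketch of that pattern; the class name, method body, and handshake prefix are invented for this example and are not the actual relay code:

    from twisted.internet import protocol
    from twisted.python import log

    class RelayConnection(protocol.Protocol):
        def dataReceived(self, data):
            # illustrative handshake check, not the real protocol string
            if not data.startswith(b"please relay "):
                log.msg("bad handshake from %s" % (self.transport.getPeer(),))
                self.transport.loseConnection()
                return
            # ... proceed with normal handshake handling ...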
There are still some rough edges, but I think I'll file a new ticket to
cover them. Note that the CLI syntax may change before this gets into a
release.
- move to a 'wormhole ssh' group with accept/invite subcommands (sketched below)
- change names of methods
- check for permissions
- use --user option (instead of --auth-file)
- move implementation to cmd_ssh.py
- if there are multiple public keys, ask the user which to use
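As a rough sketch (not the final CLI; option names, help text, and docstrings are illustrative), the Click layout described in the list above might look like:

    import click

    @click.group()
    def ssh():
        """Exchange an ssh public key through a wormhole."""

    @ssh.command()
    @click.option("--user", default=None, help="which local user's files to use")
    def accept(user):
        """One side of the key exchange (sketch only)."""
        # check file/directory permissions before touching authorized_keys

    @ssh.command()
    @click.option("--user", default=None, help="which local user's files to use")
    def invite(user):
        """The other side of the key exchange (sketch only)."""
        # the side that sends its key asks the user to pick one if several exist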
Some of us can never remember the old ditty:
i before e, except after c
or when sounding like "a"
as in neighbor or weigh.
Perhaps magic wormhole can coddle us in our misorthography :)
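If the coddling amounts to accepting the common misspelling of "receive", one Click-level way to do it (a sketch under that assumption, not necessarily the actual mechanism) is a Group subclass that remaps the name:

    import click

    class ForgivingGroup(click.Group):
        def get_command(self, ctx, name):
            if name == "recieve":   # i before e, except after c...
                name = "receive"
            return super(ForgivingGroup, self).get_command(ctx, name)

    # used as: @click.group(cls=ForgivingGroup)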
So instead of "wormhole --verify send", use "wormhole send --verify".
The full set of arguments that were moved down to the subcommands:
* --code-length=
* --verify
* --hide-progress
* --no-listen
* --tor
The following remain as top-level arguments (which should appear after
"wormhole" and before the subcommand); a sketch of the two placements
follows this list:
* --relay-url=
* --transit-helper=
* --dump-timing=
* --version
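A minimal Click sketch of the two placements (this is not the real wormhole cli module, and the options shown are just stand-ins):

    import click

    @click.group()
    @click.option("--relay-url", default=None)   # top-level: wormhole --relay-url=... send
    @click.pass_context
    def wormhole(ctx, relay_url):
        ctx.obj = {"relay_url": relay_url}

    @wormhole.command()
    @click.option("--verify", is_flag=True)      # subcommand: wormhole send --verify
    @click.pass_obj
    def send(cfg, verify):
        click.echo("relay=%s verify=%s" % (cfg["relay_url"], verify))

    if __name__ == "__main__":
        wormhole()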
The values set by the base Config constructor could mask Click parsers
that weren't supplying defaults properly, or which were using different
defaults.
When tests need a Config object, they now call a helper function which
invokes Click with a mocked-out go() function and grabs the Config object
before anything is actually done with it.
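A sketch of what that helper can look like; the wormhole.cli import path, the name of the top-level group, and the go(func, config) calling convention are all assumptions here, not the actual code:

    from unittest import mock
    from click.testing import CliRunner
    from wormhole import cli   # assumed module layout

    def config(*argv):
        runner = CliRunner()
        captured = []
        def fake_go(func, config):          # assumed go() signature
            captured.append(config)         # grab the Config, do nothing else
        with mock.patch.object(cli, "go", fake_go):
            runner.invoke(cli.wormhole, list(argv), catch_exceptions=False)
        return captured[0]

    # cfg = config("send", "--verify", "somefile")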
With this, both clients and servers will send a PING at least once every
minute, and will drop connections that haven't seen any traffic for 10
minutes.
This should help keep NAT table entries alive, and will drop connections
that are no longer viable because their NAT entries have expired.
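Autobahn's built-in keepalive options are one way to express this policy; whether the actual change uses them or a hand-rolled timer is not shown here, so treat this as a sketch of the idea:

    from autobahn.twisted.websocket import WebSocketServerFactory

    factory = WebSocketServerFactory("ws://127.0.0.1:4000/v1")
    factory.setProtocolOptions(
        autoPingInterval=60,    # send a PING at least once a minute
        autoPingTimeout=600,    # drop the connection if no PONG arrives within 10 minutes
    )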
Closes #60.
This should fix the coverage data's filenames: previously they were like
".tox/coverage/lib/python2.7/site-packages/wormhole/foo.py", now they
should be "src/wormhole/foo.py".
Never mind: it appears that Travis's pypy (2.6-ish) is too old for
PyNaCl to work, and their pypy3 (3.0-3.2-ish) is too old as well.
Revisit this when their images get updated.
Without this, the sender drops the connection before the "close" message
has made it to the server, which leaves the mailbox hanging until it
expires. The close still lives in a 'd.addBoth()' slot, so it runs even
if some error occurs, but we now wait for its Deferred to fire in both
the success and failure cases.
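The pattern looks roughly like this; the wormhole object, its close() method returning a Deferred, and the _send_everything() helper are illustrative names for this sketch:

    def _do_transfer(w):
        d = _send_everything(w)   # hypothetical helper doing the real work

        def _close(result_or_failure):
            d2 = w.close()                                # returns a Deferred
            d2.addCallback(lambda _: result_or_failure)   # preserve the original outcome
            return d2                                     # caller waits for the close too
        d.addBoth(_close)
        return d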
We already hard-code 'relay.sqlite', so I don't see a lot of value in
making the stats file configurable too. That said, if it makes life
easier for packagers (e.g. start-stop-daemon or systemd wanting these
files to go into /var/run/something/, and if it isn't sufficient to just
use /var/run/something/ as the CWD), I'd accept a patch to add it back.
This changes the DB schema and rewrites the expiration/pruning
algorithm. The previous code had several bugs which failed to clean up
nameplates, mailboxes, and messages when clients didn't explicitly close
them before disconnecting (and sometimes even when they did).
The new code runs an expiration loop every 10 minutes, and prunes
anything that is more than 11 minutes old. Clients with a connected
listener (a websocket that has open()ed a Mailbox) update the "in-use"
timestamp at the beginning of the loop. As a result, nameplates and
mailboxes should be pruned between 10 and 20 minutes after the last
activity.
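A heavily reduced sketch of that loop; the table and column names are invented for the example and do not match the real v3 schema:

    import time
    import sqlite3
    from twisted.internet.task import LoopingCall

    EXPIRATION_CHECK_PERIOD = 10 * 60   # run the loop every 10 minutes
    CHANNEL_EXPIRATION_TIME = 11 * 60   # prune anything idle for more than 11 minutes

    def prune_all(db, connected_mailbox_ids):
        now = time.time()
        for mid in connected_mailbox_ids:   # mailboxes with an open listener stay fresh
            db.execute("UPDATE mailboxes SET in_use=? WHERE id=?", (now, mid))
        cutoff = now - CHANNEL_EXPIRATION_TIME
        db.execute("DELETE FROM messages WHERE in_use<?", (cutoff,))
        db.execute("DELETE FROM mailboxes WHERE in_use<?", (cutoff,))
        db.execute("DELETE FROM nameplates WHERE in_use<?", (cutoff,))
        db.commit()

    # db = sqlite3.connect("relay.sqlite")
    # LoopingCall(prune_all, db, connected_ids).start(EXPIRATION_CHECK_PERIOD)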
Clients are expected to reconnect within 9 minutes of a connection being
lost. The current release does not survive a dropped connection, but a
future one will. By raising the 10min/11min settings for specific
applications, a "Persistent Wormhole" mode will tolerate even longer
periods of being offline.
The server now has an option to write stats to a file at the end of each
expiration loop, and the munin plugins have been rewritten to use this
file instead of reading the database directly. The file includes a
timestamp so the plugins can avoid using stale data.
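A sketch of the stats-file write; the field names and layout are made up here, and the real file's format may differ. The counts plus a timestamp are written to a temp file which is renamed into place, so the munin plugins never see a half-written file and can ignore stale data:

    import json, os, time

    def dump_stats(path, active_nameplates, active_mailboxes):
        stats = {
            "created": time.time(),   # lets readers detect stale data
            "nameplates": {"active": active_nameplates},
            "mailboxes": {"active": active_mailboxes},
        }
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(stats, f, indent=1)
            f.write("\n")
        os.rename(tmp, path)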
When this new server is run against an old (v2) database, it should
automatically upgrade it to the new v3 schema. The nameplate and mailbox
tables will be erased, but the usage (history) will be preserved.
The DB queries this uses aren't particularly efficient; when its runtime
starts to become a problem, we should do an optimization pass.