Linux defaults to a soft file-descriptor limit of 1024, which limits us to 512 simultaneous
connections that aren't using transit. The transit relay runs in the same process, so
long-running relayed transfers compete for those sockets too.
This raises the soft limit to match the hard limit (if possible), or to as much
as we can manage, whenever the soft limit is below 10k. If the
resource.setrlimit calls aren't available (e.g. Windows), or some other error
occurs, this logs a message and continues without changing the limits.
closes #238
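The change amounts to a best-effort wrapper around resource.setrlimit. A minimal sketch of the idea (Python 3; the helper name and the fallback values are illustrative, not necessarily what the server ships):

    # Best-effort sketch: raise the NOFILE soft limit as far as we can.
    try:
        import resource
    except ImportError:
        resource = None  # e.g. Windows has no 'resource' module

    def increase_rlimits(log=print):
        if resource is None:
            log("unable to import 'resource', leaving file-descriptor limit alone")
            return
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        if soft >= 10000:
            return  # already generous enough, nothing to do
        # try the hard limit first, then progressively smaller fallbacks
        for target in (hard, 10000, 3200, 1024):
            if target == resource.RLIM_INFINITY:
                continue  # skip "unlimited"; pick a concrete number instead
            if hard != resource.RLIM_INFINITY and target > hard:
                continue  # the soft limit can never exceed the hard limit
            if target <= soft:
                break     # don't lower the existing soft limit
            try:
                resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
                log("NOFILE soft limit raised to %d" % target)
                return
            except (ValueError, OSError):
                continue  # e.g. exceeds a kernel cap; try the next candidate
        log("unable to raise NOFILE soft limit, leaving it alone")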
The Mailbox object raises CrowdedError, but WebSocketRendezvous wasn't
handling it specifically. The server responded by dropping the connection and
logging an "Unhandled Error", so the client would reconnect and hit the
same error again and again.
This changes WebSocketRendezvous to handle CrowdedError by sending a
"crowded" error response. The client should react to this by giving up on the
connection entirely, and not reconnecting.
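A sketch of the shape of that handling (the handler and the send_error() helper are hypothetical stand-ins, not the actual rendezvous code):

    # Hypothetical sketch: catch CrowdedError at the websocket layer and turn
    # it into a well-known "crowded" error message instead of an unhandled failure.
    class CrowdedError(Exception):
        """Too many sides have tried to use the same mailbox."""

    def handle_open(rendezvous, send_error, msg):
        try:
            return rendezvous.open_mailbox(msg["mailbox"], msg["side"])
        except CrowdedError:
            # Previously this escaped, dropped the connection, and logged
            # "Unhandled Error"; the client would reconnect and hit it again.
            send_error("crowded", orig=msg)
            return None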
We were logging only the internal (sqlite) row ID of the nameplate, not the
actual small-integer name. While investigating misbehavior caused by overload,
I was misled into thinking that users were getting nameplates in the 15000+
range, when in fact those were merely the internal database row IDs.
This now shares the _compose() decorator with wormhole.cli.cli, and removes
the arguments_to_config() function in favor of just copying all kwargs into
the Config object.
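Roughly, the shape is the following sketch (the Config attributes and the kwargs_to_config name are illustrative; _compose is shown as a plain decorator-composition helper):

    # Illustrative sketch: _compose applies a stack of click-style decorators,
    # and the command handler copies every CLI keyword argument onto Config.
    class Config:
        """Plain attribute bag holding the server's CLI options."""

    def _compose(*decorators):
        def decorate(f):
            for d in reversed(decorators):
                f = d(f)
            return f
        return decorate

    def kwargs_to_config(**kwargs):
        cfg = Config()
        for name, value in kwargs.items():
            setattr(cfg, name, value)  # no per-option plumbing needed
        return cfg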
A released nameplate can no longer be re-claimed, at least not by the same
side. This forces the progression of claims (by any given side) to be strictly
unclaimed -> claimed -> released. The "claim" action (unclaimed -> claimed) is
idempotent and can be repeated arbitrarily, as long as the repeats happen on
separate websocket connections. Likewise for the "release" action
(claimed -> released). But once a side releases a nameplate, it should never
roll back so far that it tries to claim it again, especially because the first
claim causes a mailbox to be allocated: if we managed to allocate two different
mailboxes for a single nameplate, we would have thrown idempotency out the window.
This also makes it possible to call release() even though you haven't called
claim() on that particular socket (releasing a claim that was made on some
previous websocket connection).
This should enable reconnecting clients, as well as intermittently-connected
"offline" clients.
refs #118
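A minimal sketch of the per-side rule this enforces (class and method names are illustrative, not the real server schema):

    # Per-side nameplate contour: unclaimed -> claimed -> released, never backwards.
    UNCLAIMED, CLAIMED, RELEASED = "unclaimed", "claimed", "released"

    class ReclaimedError(Exception):
        """A side tried to claim a nameplate it had already released."""

    class Nameplate:
        def __init__(self, name):
            self.name = name
            self.side_states = {}   # side -> UNCLAIMED / CLAIMED / RELEASED
            self.mailbox_id = None  # allocated exactly once, on the first claim

        def claim(self, side, allocate_mailbox):
            if self.side_states.get(side, UNCLAIMED) == RELEASED:
                # once released, the same side must never claim this nameplate again
                raise ReclaimedError(self.name)
            if self.mailbox_id is None:
                self.mailbox_id = allocate_mailbox()
            self.side_states[side] = CLAIMED  # idempotent: repeat claims are harmless
            return self.mailbox_id

        def release(self, side):
            # allowed even if this websocket never claimed: the claim may have
            # been made by the same side on an earlier connection
            self.side_states[side] = RELEASED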
There was a function to "abbreviate" sizes, but it was somewhat
unclear and incomplete. Reuse the sizeof_fmt_* set of functions from
the borg backup project (MIT licensed) to implement a more complete
and flexible display that scales up to the yottabyte range and
beyond. It also supports IEC units (like the "kibibyte", AKA 1024
bytes) if you fancy that stuff.
This is a workaround for #91: it allows users to better see the size
of the file that will be transferred.
*Some* places are still kept in bytes, most notably when receive fails
to get all the bytes ("got %d bytes, wanted %d"), because we may want
more clarity there.
Text transfers also use the "bytes" suffix (instead of "B") because they
will commonly not reach beyond the KiB range.
Note that the test suite only covers the decimal (non-IEC) prefixes, but that
is assumed to be sufficient to consider the formatting correct.
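The resulting formatter behaves roughly like this sketch (the function name and exact rounding are illustrative, not the borrowed borg code verbatim):

    # Scale a byte count through a list of prefixes, decimal or IEC flavor.
    def naturalsize(num, binary=False, suffix="B", precision=2):
        power = 1024 if binary else 1000
        if binary:
            prefixes = ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi", "Yi"]
        else:
            prefixes = ["", "k", "M", "G", "T", "P", "E", "Z", "Y"]
        for prefix in prefixes[:-1]:
            if abs(num) < power:
                break
            num = num / power
        else:
            prefix = prefixes[-1]  # beyond yotta: keep the largest prefix
        if prefix == "":
            return "%d %s" % (num, suffix)                        # e.g. "512 B"
        return "%.*f %s%s" % (precision, num, prefix, suffix)     # e.g. "1.50 kB"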
I think somebody was port-scanning the server (or pointed some
non-wormhole client at it), and caused some exceptions in the logs.
These are still bad handshakes, but they are now logged normally instead of
raising exceptions.
With this, both clients and servers will send a PING at least once every
minute, and will drop connections that haven't seen any traffic for 10
minutes.
This should help keep NAT table entries alive, and will drop connections
that are no longer viable because their NAT entries have expired.
closes #60
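As an illustration of the policy only (not the actual wormhole classes or wire format), a Twisted protocol enforcing it could look like:

    # Illustrative keepalive: ping every 60s, drop after 600s without traffic.
    import time
    from twisted.internet.task import LoopingCall
    from twisted.internet.protocol import Protocol

    PING_INTERVAL = 60       # seconds between PINGs
    TRAFFIC_TIMEOUT = 600    # drop after this much silence

    class KeepaliveProtocol(Protocol):
        def connectionMade(self):
            self._last_traffic = time.time()
            self._keepalive = LoopingCall(self._check)
            self._keepalive.start(PING_INTERVAL, now=False)

        def dataReceived(self, data):
            self._last_traffic = time.time()  # any traffic counts, not just PONGs

        def _check(self):
            if time.time() - self._last_traffic > TRAFFIC_TIMEOUT:
                self.transport.loseConnection()  # NAT entry has probably expired
            else:
                self.transport.write(b"PING\n")  # keeps NAT table entries alive

        def connectionLost(self, reason):
            if self._keepalive.running:
                self._keepalive.stop()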
We already hard-code 'relay.sqlite', so I don't see a lot of value in
making the stats file configurable too. That said, if it makes
life easier for packagers (e.g. start-stop-daemon or systemd wanting
these files to go into /var/run/something/), and if it isn't sufficient
to just use /var/run/something/ as the CWD, I'd accept a patch to
add it back.
The DB queries this uses aren't particularly efficient, and when the
time it takes to run starts to become a problem, we should do an
optimization pass.
This counts the number of "standalone" mailboxes we create, which
happens when a client does open() without first using a nameplate. The
current client doesn't do this, but future clients might.
This moves responsibility for the periodic prune-everything Timer up to
RelayServer too. That way we can be sure the stats are dumped
immediately after prune, and we can incorporate stats from Transit as
well.
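A sketch of that arrangement (method names like prune_all() and get_stats() are assumptions, not the actual RelayServer API):

    # One timer owned by the parent service: prune first, then dump stats so
    # the numbers reflect the freshly-pruned state, including Transit's.
    import json
    from twisted.application import service
    from twisted.internet.task import LoopingCall

    EXPIRATION_CHECK_PERIOD = 10 * 60  # seconds

    class RelayServer(service.MultiService):
        def __init__(self, rendezvous, transit, stats_file="stats.json"):
            service.MultiService.__init__(self)
            self._rendezvous = rendezvous
            self._transit = transit
            self._stats_file = stats_file
            self._timer = LoopingCall(self.timer)

        def startService(self):
            service.MultiService.startService(self)
            self._timer.start(EXPIRATION_CHECK_PERIOD)

        def timer(self):
            self._rendezvous.prune_all()  # expire stale nameplates/mailboxes
            stats = {"rendezvous": self._rendezvous.get_stats(),
                     "transit": self._transit.get_stats()}
            with open(self._stats_file, "w") as f:
                json.dump(stats, f, indent=1)

        def stopService(self):
            if self._timer.running:
                self._timer.stop()
            return service.MultiService.stopService(self)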