Anyone sending 4GB in a single `transport.write()` is in for a surprise, but
at least we'll surprise them with an assertion *before* spending the time and
memory encrypting that monster.
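A minimal sketch of the idea (the helper, the encryptor, and the 4-byte length prefix are all assumptions, not the actual framing code): reject an oversized write up front, before paying to encrypt it.

```python
MAX_FRAME = 2 ** 32 - 1   # the most a 4-byte length prefix can describe

def encrypt_and_frame(encryptor, plaintext):
    # fail fast: the real check would also leave room for the auth tag,
    # but the point is to assert before encrypting a multi-gigabyte buffer
    assert len(plaintext) <= MAX_FRAME, "too much data for a single write()"
    ciphertext = encryptor.encrypt(plaintext)
    return len(ciphertext).to_bytes(4, "big") + ciphertext
```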
I think I just managed to forget that inbound_close requires us to respond with
a close of our own. Also, an outbound open means we must add the subchannel to
the inbound table; otherwise we can't receive any data on it at all.
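A rough sketch of both fixes (the class and method names are illustrative, not the real Dilation code):

```python
class Manager:
    def __init__(self):
        self._inbound_subchannels = {}   # scid -> SubChannel

    def inbound_close(self, scid):
        sc = self._inbound_subchannels.pop(scid)
        sc.remote_closed()
        self.send_close(scid)            # we must answer with a close of our own

    def outbound_open(self, scid, subchannel):
        # without this, data arriving on a channel we opened would be dropped
        self._inbound_subchannels[scid] = subchannel
        self.send_open(scid)

    def send_open(self, scid): ...
    def send_close(self, scid): ...
```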
I'm sure I had a good reason for avoiding integers, but it makes logging and
testing more difficult, and both sides are using integers to generate them
anyway (one side can pick the odd ones, and the other can pick the even ones).
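A sketch of that allocation scheme (the class is hypothetical; which side gets odd vs. even doesn't matter as long as they differ):

```python
class SubchannelIdAllocator:
    """Mint subchannel ids without coordinating with the other side."""
    def __init__(self, use_odd):
        self._next = 1 if use_odd else 2

    def allocate(self):
        scid = self._next
        self._next += 2
        return scid

# e.g. one side takes the odd ids, the other the even ids
leader_ids = SubchannelIdAllocator(use_odd=True)
follower_ids = SubchannelIdAllocator(use_odd=False)
assert leader_ids.allocate() == 1 and follower_ids.allocate() == 2
```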
Python 3.4 is no longer maintained (PEP 429 lists March 2019 as the EOL date),
and the current version of pip (19.1) is the last to support it, so it's time
to let go.
* remove py34 from .travis.yml so PRs don't fail spuriously (there's some
problem with txtorcon and twisted that only happens on py3.4)
Without this, the Follower would see data for subchannel 0 before it had a
chance to create the SubChannel object that could accept it. We already have
a mechanism for inbound data to be queued inside the SubChannel until the
endpoint has had a chance to create the Protocol object: we rely on that
mechanism here. We just need to create the SubChannel before telling the
Manager to start, even though we don't reveal the SubChannel to the
caller (via the control endpoint) until the connection is known to succeed.
This helps a manual test get data from one side to the other without throwing
exceptions.
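A rough sketch of the queue-until-a-Protocol-exists mechanism being relied on here (a simplified, hypothetical shape, not the real class):

```python
class SubChannel:
    """Hold inbound data until a Protocol is attached, then replay it.

    Creating the SubChannel before Manager.start() means this same queue
    also covers data that races ahead of the control endpoint.
    """
    def __init__(self, scid):
        self._scid = scid
        self._protocol = None
        self._pending_data = []

    def remote_data(self, data):
        if self._protocol is None:
            self._pending_data.append(data)   # too early: queue it
        else:
            self._protocol.dataReceived(data)

    def _set_protocol(self, protocol):
        self._protocol = protocol
        for data in self._pending_data:
            protocol.dataReceived(data)
        self._pending_data = []
```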
When the follower's connection is accepted, it will observe a single
dataReceived chunk containing both the leader's KCM and the leader's first
actual data record. The state machine defers consideration of the KCM to an
eventual (later) turn before selecting the connection, so the data record will
arrive while the connection isn't quite ready for it (if consider() were
immediate, this wouldn't be a problem, but Automat doesn't deal with reentrant
calls very well). So we queue any records that arrive before we're selected.
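A sketch of that queueing (hypothetical method names): records that arrive between the KCM and selection are buffered, then flushed once this connection wins.

```python
class Connection:
    def __init__(self):
        self._selected = False
        self._queued_records = []

    def record_received(self, record):
        if not self._selected:
            # the KCM hasn't been considered yet, so hold on to this
            self._queued_records.append(record)
            return
        self._deliver(record)

    def selected(self):
        # called (on a later turn) once this connection is chosen
        self._selected = True
        while self._queued_records:
            self._deliver(self._queued_records.pop(0))

    def _deliver(self, record):
        ...   # hand the record to the next layer up
```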
If we had multiple potential connections, cancelling the losing ones was
putting an error into log.err(), which flunked the tests. This happened to
show up on Windows because the AppVeyor environment has different interfaces
than the Travis hosts.
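One plausible shape of the fix (an assumption, not the actual change): swallow the CancelledError from the losing connection attempts so nothing reaches log.err().

```python
from twisted.internet import defer

def cancel_losers(pending_deferreds):
    # trap the CancelledError each cancel() produces so it isn't reported
    # as an unhandled error in the test's log
    for d in pending_deferreds:
        d.addErrback(lambda f: f.trap(defer.CancelledError))
        d.cancel()
```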
Builds are failing on AppVeyor because it tries to install `noiseprotocol`
unconditionally, even though it's only installable under py3.
Thanks to @julian and julian/jsonschema's .appveyor for the hints.
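One way to express that constraint (an assumption about the shape of a fix, not necessarily what was done here): a PEP 508 environment marker in setup.py, so pip skips `noiseprotocol` on interpreters that can't use it.

```python
# setup.py excerpt (illustrative; the project/extra names are placeholders)
from setuptools import setup

setup(
    name="example",
    extras_require={
        "dilate": ["noiseprotocol ; python_version >= '3.5'"],
    },
)
```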
Add an integration test which exercises a full w.dilate connection and the
control endpoint.
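A very rough sketch of that test's shape (the endpoint-handling details here are assumptions, not the actual test code):

```python
from twisted.internet.defer import inlineCallbacks
from twisted.internet.protocol import Protocol, Factory

class Ping(Protocol):
    def connectionMade(self):
        self.transport.write(b"ping")

    def dataReceived(self, data):
        print("control channel got:", data)

@inlineCallbacks
def exercise_control_endpoint(w_left, w_right):
    # assumption: dilate() fires with (control, connect, listen) endpoints
    control_l, connect_l, listen_l = yield w_left.dilate()
    control_r, connect_r, listen_r = yield w_right.dilate()
    # each side connects a Protocol to the (single) control subchannel
    yield control_l.connect(Factory.forProtocol(Ping))
    yield control_r.connect(Factory.forProtocol(Ping))
    # each side's Ping should now receive the other's b"ping"
```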
Still untested:
* reconnecting after the initial TCP connection is lost
* resending data that wasn't acked before the connection was lost
* (re)sending data that was submitted while no connection was available
* the connect- and listen- endpoints
Dilator.stop() now shuts everything down, and returns a Deferred that fires
when it all stops moving. This needed some Manager state machine changes (to notify
Dilator when it enters the STOPPED state). This also revealed problems in the
delivery of connector_connection_made() (which was misnamed) and
connector_connection_lost() (which wasn't being called at all).
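A simplified sketch of the stop()-returns-a-Deferred arrangement (hypothetical method names; the real Manager is driven by its state machine):

```python
from twisted.internet.defer import Deferred

class Manager:
    def __init__(self):
        self._stopped_d = Deferred()

    def stop(self):
        # the real code walks the state machine; here we jump straight to
        # the STOPPED entry action
        self._entered_STOPPED()

    def when_stopped(self):
        return self._stopped_d

    def _entered_STOPPED(self):
        self._stopped_d.callback(None)   # tell the Dilator we're done

class Dilator:
    def __init__(self, manager):
        self._manager = manager

    def stop(self):
        d = self._manager.when_stopped()
        self._manager.stop()
        return d   # fires once everything has stopped moving
```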
So far this is only used by tests (and it will simplify the integration test
that hasn't landed yet), but it is not yet wired up to Boss, so there's no way
for applications to enable it.
I mistakenly believed that Noise handshakes are simultaneous. In fact, the
Responder waits until it sees the Initiator's handshake before sending its
own. I had to update the Connection state machines to work this way (the
Record machine now has set_role_leader and set_role_follower), and update the
tests to match.
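A minimal sketch of that ordering with the `noiseprotocol` library (the NN cipher suite here is just an example, not necessarily what Dilation uses):

```python
from noise.connection import NoiseConnection

NAME = b"Noise_NN_25519_ChaChaPoly_SHA256"
initiator = NoiseConnection.from_name(NAME)
responder = NoiseConnection.from_name(NAME)
initiator.set_as_initiator()
responder.set_as_responder()
initiator.start_handshake()
responder.start_handshake()

msg1 = initiator.write_message()   # the Initiator (leader) speaks first
responder.read_message(msg1)       # the Responder must see this before...
msg2 = responder.write_message()   # ...it will send its own handshake
initiator.read_message(msg2)
```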
For debugging I added a `_role` property to Record, but it should probably be
removed.