-
When SSL asks the app to retry a write, the local write buffer may need to be grown (read: reallocated) to hold the application bytes. At that point, the buffer is no longer at its original address. This fixes segfaults.
-
I've seen various variations, like "mqtt" and "mqttv31". The 3.1.1 spec is clear about what it should be, but the 3.1 spec isn't. So, we now allow anything with "mqtt" in it.
-
Now that parsing and handling of packets are separated, the main code can be used to parse packets in the new FlashMQTestClient. This gives great flexibility in inspecting the server's responses. We now also have the ability to write tests for MQTT5 features.
-
Will be used for the test client I have in mind.
-
Will be used for the test client I have in mind.
-
This is necessary for the test client I have in mind, so I can re-use this code in that new test client which has no MQTT behavior, but just returns packets (meaning I have to be able to parse them without initiating handling).
-
This is necessary for the test client I have in mind, so I can re-use this code in that new test client which has no MQTT behavior, but just returns packets (meaning I have to be able to parse them without initiating handling).
-
And also to future parse... methods. This is necessary for the upcoming new test client.
-
This needed a separation: getting the current thread, and getting the thread of the client you're queueing a command for. This also resolves a circular reference between Client and ThreadData.
-
Instead of the thread data, which didn't make sense.
-
This required adding a global stats object. It also contains a bit of refactoring to make a type out of the derived counters.
-
This makes much more sense than returning the amount of messages sent all the way up the call stack.
-
They don't need object state.
-
It tested the wrong thing. Tests still pass.
-
This fixes clients being disconnected after reducing the max value and reloading the settings.
-
The code allowed an attempt to upgrade a read lock to a write lock, which would, or at least could, fail. In the existing code, the only attempted double lock was a write lock, so this change doesn't actually fix anything, but it's clearer, and we can do without the lock that was removed. It just takes a bit more overhead on app start-up, to place a lock per message.
-
It was reversed...
-
Check events are placed in a sorted map based on each client's last activity and keep-alive interval. This makes the checks more accurate and reduces system load, because it avoids unnecessary checking.
-
This fixes wrong ones on various distros.
-
This is required to be able to put them all in the repo with reprepro.
-
I don't understand why it still works on Travis CI, though.
-
There's no point in keeping a vector per nanosecond.
-
There's no point in keeping a vector of removals per nanosecond.
-
This is theoretical, because I never called addCallback() after having started the timer.
-
This is fast(er).
-
This is way faster.
-
This should fix corruption. There were inexplicable stalls of timed events, and the debugger showed the names were all corrupted. std::sort can apparently do that sort of thing when its comparator is invalid.