-
This requires storing the clientid and username in the Publish object.
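A minimal sketch of the idea (field names are hypothetical; the real Publish class has more members):

    #include <string>

    struct Publish
    {
        std::string topic;
        std::string payload;
        char qos = 0;

        // Newly stored origin of the message, so downstream code can
        // tell who published it without passing extra arguments around.
        std::string client_id;
        std::string username;
    };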
-
This makes much more sense than returning the number of messages sent all the way up the call stack.
-
It merely drops packets when they exceed it. The specs are unclear about whether you're supposed to delay transmission until the quota is non-negative again. I decided against that because of the increased complexity, and because for a continuously overloaded client it makes no sense. Effectively, this formalizes the 'max qos pending' mechanism that was already in place. It also includes PUBACK/PUBREL/PUBCOMP error handling, because that needed to be done for proper quota control.
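Roughly, the quota bookkeeping works like this (a sketch with hypothetical names, not the actual implementation):

    // The quota starts at the client's 'receive maximum'. It's taken
    // when a QoS > 0 message is sent and given back when the flow
    // completes (PUBACK for QoS 1, PUBCOMP for QoS 2). When exhausted,
    // packets are dropped instead of delayed.
    class Session
    {
        int sendQuota = 20; // the client's 'receive maximum'

    public:
        bool trySend(int qos)
        {
            if (qos > 0)
            {
                if (sendQuota <= 0)
                    return false; // drop; don't delay transmission
                sendQuota--;
            }
            // ... write the packet to the client ...
            return true;
        }

        void onQosFlowCompleted() // on PUBACK (QoS 1) or PUBCOMP (QoS 2)
        {
            sendQuota++;
        }
    };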
-
It didn't count the seconds it was already waiting. It does now.
-
This required a special type WillPublish to make this easier and more logical.
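Something along these lines, extending the Publish sketch from above (members are guesses, except the will delay, which MQTT5 defines):

    #include <chrono>
    #include <cstdint>

    // A will message is a Publish plus will-specific state, such as the
    // MQTT5 will delay interval and when it was saved, so a separate
    // derived type keeps that logic out of the regular publish path.
    struct WillPublish : public Publish
    {
        uint32_t willDelay = 0; // seconds; MQTT5 will delay interval
        std::chrono::steady_clock::time_point savedAt =
            std::chrono::steady_clock::now();
    };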
-
The loop over pending messages was stuck: we had saved a session expiry interval of 0. I may need to change that session-copying idea.
-
This prevents bugs in which the calling context forgets it. A (small) downside is that I have to make the Publish argument non-const. But then, that's exactly what it is, so...
-
But this is a safe point before I refactor it. I will remove the appStartTime and the session's 'last touched' time. With the new queued removals, these are no longer necessary.
-
I'm simplifying/merging the rec, comp and rel packets, but I'm not sure it will work. Committing as a safe point. Later: I got it done as planned. QoS > 0 and MQTT5 still need more testing.
-
Also one line concerning correlation data, which fixed a bug.
-
The behavior for MQTT3 clients is the same, but I replaced the term 'clean session' and described the behavior in MQTT5 terms: 'clean start' and a session expiry interval.
-
Most of it concerns limits we had already implemented in a non-standard-compliant way.
-
This is preparation for MQTT5, because when receivers and publishers use different protocols, you can't always just write out the same packet. Sometimes you can, though, and that's what the copy factory determines.
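Conceptually, something like this (a hypothetical sketch; all names are made up):

    #include <memory>

    // An MQTT3 subscriber can't be sent a packet parsed from an MQTT5
    // publisher verbatim (properties, different header layout), so a
    // new packet has to be built for it. When the versions match, the
    // original packet can be shared as-is.
    std::shared_ptr<MqttPacket> getPacketCopy(
        const std::shared_ptr<MqttPacket> &original,
        ProtocolVersion receiverVersion)
    {
        if (original->getProtocolVersion() == receiverVersion)
            return original; // same wire format: reuse, no copy

        return std::make_shared<MqttPacket>(receiverVersion,
                                            original->getPublishData());
    }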
-
This is (insignificantly) slower, but otherwise existing sessions wouldn't get the new limits when reloading the config.
-
Because that's what it is now. A lot of code can be refactored to get its settings from this now, but I'm not going to do that yet.
-
This entails making copies of the original packet when necessary, because QoS 0 doesn't have a packet id. I tried to keep copying to an absolute minimum and did some precarious optimizations for it. There are tests, though.
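In sketch form (hypothetical names), the distinction looks like this:

    // QoS 0 packets carry no packet id, so the packet as received can
    // be written to every subscriber unchanged. For QoS > 0, each
    // receiver needs its own packet id, which forces a copy.
    void forwardToSubscriber(Session &receiver,
                             const std::shared_ptr<MqttPacket> &original,
                             char effectiveQos)
    {
        if (effectiveQos == 0)
        {
            receiver.write(original); // reuse the original, no copy
            return;
        }

        auto copy = std::make_shared<MqttPacket>(*original);
        copy->setPacketId(receiver.getNextPacketId()); // per receiver
        copy->setQos(effectiveQos);
        receiver.write(copy);
    }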
-
It caused typical global-variable issues, showing up as the retained-messages recursive algorithm breaking, because the referenced subtopics changed halfway through (see the previous commit, which has the test for it). I need to perform some benchmarks to see whether I need to devise an alternative.
-
The only mutable session data of a client is QoS-related, so when we're copying sessions (for saving them), we need to lock the QoS data, because it gets modified by active client traffic in worker threads. Note: this is not well tested at this point, nor was I ever able to trigger actual errors despite long stress testing, so it's a theoretical fix.
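The shape of the fix, as a sketch (hypothetical member names):

    #include <mutex>

    class Session
    {
        mutable std::mutex qosDataMutex;
        QoSPacketQueue qosPacketQueue; // mutated by worker threads

    public:
        // Copy used when saving sessions: lock the source's QoS data so
        // a worker thread can't modify it halfway through the copy.
        Session(const Session &other)
        {
            std::lock_guard<std::mutex> locker(other.qosDataMutex);
            qosPacketQueue = other.qosPacketQueue;
        }
    };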
-
This method incurs no extra CPU load when messages aren't dropped.
-
One was confirmed: writing an MQTT packet into a client that disconnected after the weak pointer had been checked for validity. The rest made sense to change as well.
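The confirmed one is the classic time-of-check/time-of-use mistake with weak pointers; a sketch of the wrong and right patterns (names are hypothetical):

    #include <memory>

    void writePacket(std::weak_ptr<Client> weakClient, const MqttPacket &packet)
    {
        // Buggy: checking expired() and then using the client leaves a
        // window in which the client can disconnect and be destroyed:
        //
        //     if (!weakClient.expired())
        //         /* client may already be gone here */
        //
        // Correct: lock() into a shared_ptr, which keeps the object
        // alive for the duration of the write.
        std::shared_ptr<Client> client = weakClient.lock();
        if (client)
            client->writeMqttPacket(packet);
    }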
-
There were bugs regarding which authentication object was used when, causing threading bugs. Instead of getting it from the 'sender', we can just store a thread-local pointer.
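Something like this (a sketch; the surrounding names are guesses):

    class Authentication; // each worker thread owns one instance

    // Set once at thread start; code deep in the call stack then uses
    // the current thread's object instead of deriving it from the
    // 'sender', which may belong to another thread.
    thread_local Authentication *threadLocalAuthentication = nullptr;

    void workerThreadStart(Authentication &auth)
    {
        threadLocalAuthentication = &auth;
        // ... run this thread's event loop ...
    }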
-
Files are simple serialized bytes prefaced by lengths (a sketch of the idea follows below). The file is hashed to verify integrity. This was also a good way of preventing unexpected errors when trying to crash the parser by having it load a different file. This change includes some refactoring that was necessary:
- It 'fixes' looking at the wrong thread's authentication. This is still wrong, though; it will be fixed by a thread-local pointer in the next commit.
- Deadlocks with yourself are handled in rwlockguard.
- QoSPacketQueue is now a class.
- Probably other tweaks.
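The length-prefix idea, as a minimal sketch (hypothetical helper; the real format and hash choice may differ):

    #include <cstdint>
    #include <string>
    #include <vector>

    // Each field is written as its length followed by the raw bytes.
    // On save, a hash over everything written is appended; on load it
    // is recomputed and compared before parsing, so a truncated or
    // foreign file is rejected instead of confusing the parser.
    void writeLengthPrefixed(std::vector<char> &out, const std::string &s)
    {
        const uint32_t len = static_cast<uint32_t>(s.size());
        out.insert(out.end(), reinterpret_cast<const char*>(&len),
                   reinterpret_cast<const char*>(&len) + sizeof(len));
        out.insert(out.end(), s.begin(), s.end());
    }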
-
Also includes a few stats.
-
Also fixes not downgrading QoS on subscribe.