-
Start read stage threads by getting a frame from the data source first, rather than starting with a null item and getting a frame in run(). This is slightly more awkward at startup, but is more consistent with how other single-threaded stages work, and we don't end up queueing unnecessary threads (since we only start a thread if we have a frame). We still do reads in the first stage, but they are for the next frame, not the one we are currently working on.

Reset SequencingBuffers as part of resetting the processing stages after a stream::project ends. This is necessary because sequencing buffers wait to receive specific frames before reporting new frames as available.

Use stage numbers directly as priorities for new jobs, since Qt 5.1 fixed the bug where priorities were backwards.

Surface the active frame count as a parameter of stream, and lower the default from 500 to 100, though that may still be unnecessarily high for a lot of cases.

Add some debug methods, improve some comments.
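A minimal sketch of the frame-first startup, using illustrative names rather than OpenBR's real API: a worker is only queued when the data source actually yields a frame, so no thread starts with a null item.

```cpp
#include <optional>
#include <queue>

// Hypothetical stand-in for the real data source: frame ids replace frames.
struct DataSource {
    std::queue<int> frames;

    std::optional<int> tryGetFrame() {
        if (frames.empty())
            return std::nullopt;
        int f = frames.front();
        frames.pop();
        return f;
    }
};

// Start one worker per available frame; start none when the source is
// empty, instead of starting with a null item and reading inside run().
int startWorkers(DataSource &src) {
    int started = 0;
    while (auto frame = src.tryGetFrame()) {
        ++started;       // a real implementation would hand *frame to a thread
        (void)*frame;
    }
    return started;
}
```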
-
Changes to variable handling for classification/regression/clustering
-
Conflicts: openbr/plugins/algorithms.cpp
-
Changes to Pipe::train
-
In the train loop, consider timeVarying in addition to trainable. When setting up partial projects, stop at timeVarying stages as well as trainable stages. Call projectUpdate (with the full training set) on timeVarying stages.
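The partitioning above can be sketched as follows, with illustrative types rather than OpenBR's real ones: partial projection stops at the first stage that is trainable or timeVarying, and timeVarying stages receive projectUpdate on the full training set.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a pipeline stage.
struct Stage {
    bool trainable = false;
    bool timeVarying = false;
    int updates = 0;
    void projectUpdate() { ++updates; } // stand-in for the real call
};

// Index of the first stage where a partial project must stop:
// the first stage that is trainable OR timeVarying.
std::size_t firstStopStage(const std::vector<Stage> &stages) {
    for (std::size_t i = 0; i < stages.size(); ++i)
        if (stages[i].trainable || stages[i].timeVarying)
            return i;
    return stages.size();
}

// Train-loop step: timeVarying stages are updated with the training set.
void updateTimeVarying(std::vector<Stage> &stages) {
    for (Stage &s : stages)
        if (s.timeVarying)
            s.projectUpdate();
}
```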
-
Resolved conflicts: app/br/br.cpp openbr/core/bee.cpp openbr/plugins/output.cpp
-
A way to avoid deadlocks when using stream inside a distribute transform
-
Since we no longer put Stream threads on the global thread pool, the previous changes are unnecessary. Also, in distribute, set up the QFutureSynchronizer with futures in the opposite order. In Qt 5.1, the waiting thread will wait for each future in the order they were added to the synchronizer, and thread execution will proceed in the same order. This prevents the waiting thread from ever doing anything besides waiting. Reversing the QFuture order (so that it is the opposite of execution order) allows the waiting thread to steal work as intended.
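The ordering fix reduces to a simple reversal, sketched here in plain C++ as a stand-in for QFutureSynchronizer: jobs run in execution order, but are registered with the synchronizer in reverse, so the waiting thread first blocks on a job that has not started yet and is free to steal queued work instead.

```cpp
#include <algorithm>
#include <vector>

// Given futures in execution order, return the order in which they should
// be added to the synchronizer (the reverse), so the waiter never blocks
// on the job that is currently running.
template <typename Future>
std::vector<Future> synchronizerOrder(std::vector<Future> executionOrder) {
    std::reverse(executionOrder.begin(), executionOrder.end());
    return executionOrder;
}
```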
-
Don't use the global thread pool for streams. Micro-managing the global thread pool's active thread count has proven infeasible, so in order to avoid thread-based deadlocks we don't use the global thread pool. Instead, we share thread pools across sibling Stream transforms. Miscellaneous code cleanup and better last-frame detection.
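A minimal sketch of the pool-sharing arrangement, with illustrative types rather than OpenBR's real API: a parent creates one pool and hands the same instance to every sibling Stream transform, bounding their combined thread use without touching the global pool.

```cpp
#include <memory>
#include <vector>

// Hypothetical stand-in for a private thread pool.
struct ThreadPool {
    int maxThreads;
};

// Hypothetical stand-in for a Stream transform holding a shared pool.
struct StreamTransform {
    std::shared_ptr<ThreadPool> pool;
};

// The parent builds one pool and passes it to each sibling it creates,
// so siblings share threads instead of each drawing on the global pool.
std::vector<StreamTransform> makeSiblings(int count, int maxThreads) {
    auto shared = std::make_shared<ThreadPool>(ThreadPool{maxThreads});
    std::vector<StreamTransform> siblings;
    for (int i = 0; i < count; ++i)
        siblings.push_back(StreamTransform{shared});
    return siblings;
}
```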