A new-local-candidate signal can be emitted either from the main thread (for local host candidates) or from the 'streaming' thread, if nice_agent_attach_recv was attached to a different GMainContext than the main thread's and the local candidate is a server reflexive candidate. Whenever this happens, libnice would need to do the g_signal_emitv from the agent's main context, to ensure all signals are always sent from the same thread.
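For context, here is a minimal sketch of the setup that can trigger this (not taken from any particular application: streaming_thread_func and recv_cb are hypothetical names, and stream/component ids of 1 are assumed). The agent lives on the default main context, while nice_agent_attach_recv attaches the receive callback to a second context iterated in a separate 'streaming' thread, so server reflexive candidates discovered during STUN processing there get signalled from that thread:

#include <agent.h>   /* libnice public header */

/* Matches NiceAgentRecvFunc; payload handling elided. */
static void
recv_cb (NiceAgent *agent, guint stream_id, guint component_id,
         guint len, gchar *buf, gpointer user_data)
{
}

/* Thread body; the NiceAgent is assumed to have been created on
 * the main thread and passed in via user_data. */
static gpointer
streaming_thread_func (gpointer user_data)
{
  NiceAgent *agent = user_data;
  GMainContext *streaming_context = g_main_context_new ();
  GMainLoop *loop = g_main_loop_new (streaming_context, FALSE);

  /* Receive callbacks (and, today, some signal emissions such as
   * new-candidate for server reflexive candidates) will run on
   * this thread, not on the agent's main thread. */
  nice_agent_attach_recv (agent, 1, 1, streaming_context,
                          recv_cb, NULL);

  g_main_loop_run (loop);
  return NULL;
}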
I tried to do that, but I ran into complications; I just can't remember what they were. I think if you run different parts of libnice in different threads, then the rest of your code must be thread-safe... That said, with the new signal-after-unlock, it would be quite easy to defer the signals to a different thread.
(In reply to comment #1)
> I tried to do that, but I ran into complications; I just can't remember what
> they were. I think if you run different parts of libnice in different
> threads, then the rest of your code must be thread-safe... That said, with
> the new signal-after-unlock, it would be quite easy to defer the signals to
> a different thread.

Yeah, I was thinking of doing that: if !g_main_context_is_owner, then g_main_context_invoke. It should be fairly easy to achieve. If you can remember what issues you ran into, let me know.
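A minimal sketch of that idea, assuming a helper around the emission; this is not libnice's actual code, and agent_queue_signal, agent_emit_deferred and EmitData are hypothetical names. The signal's arguments are elided here; real code would also have to copy and carry them (e.g. the NiceCandidate for new-candidate) in the deferred data. Note also that g_main_context_invoke already calls the function directly when the caller owns the context, so the explicit g_main_context_is_owner check mostly documents the intent:

#include <glib-object.h>

typedef struct {
  GObject *agent;        /* the NiceAgent, held with a ref */
  gchar   *signal_name;  /* e.g. "new-candidate"; its arguments
                          * would also need to be carried here */
} EmitData;

/* GSourceFunc run on the agent's context by g_main_context_invoke(). */
static gboolean
agent_emit_deferred (gpointer user_data)
{
  EmitData *data = user_data;

  g_signal_emit_by_name (data->agent, data->signal_name);

  g_object_unref (data->agent);
  g_free (data->signal_name);
  g_free (data);
  return G_SOURCE_REMOVE;
}

static void
agent_queue_signal (GObject *agent, GMainContext *agent_context,
                    const gchar *signal_name)
{
  if (g_main_context_is_owner (agent_context)) {
    /* Already on the agent's thread: emit synchronously. */
    g_signal_emit_by_name (agent, signal_name);
  } else {
    /* Defer the emission to the agent's context so every
     * signal comes from the same thread. */
    EmitData *data = g_new0 (EmitData, 1);
    data->agent = g_object_ref (agent);
    data->signal_name = g_strdup (signal_name);
    g_main_context_invoke (agent_context, agent_emit_deferred, data);
  }
}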
I've implemented it in: https://github.com/kakaroto/libnice/commit/970aad4f58aca96a22946916fee7d89bb9483d76 For some reason, though, the test-io-stream-closing-read unit test fails, so I disabled the change in the code. I'm unable to figure out why that unit test fails.
Have you had a chance to figure out why it was breaking the unit tests?
Nope, I never got time to look into this any further, as I've been very busy lately (and will be for a while). If you get the time for it, I'd appreciate it if you debugged it instead; if not, it will just have to wait.
Migrated to Phabricator: http://phabricator.freedesktop.org/T103