The following code in agent_recv_message_unlocked is executed repeatedly for all incoming traffic:

nice_debug ("%s: Received %d valid messages of length %" G_GSIZE_FORMAT
    " from base socket %p.", G_STRFUNC, retval, message->length, nicesock);

In my profiler, vsprintf becomes a visible part of the CPU usage of an RTP application due to this nice_debug call. I know nice_debug can be no-op'd through NDEBUG (via --disable-asserts), but it's common to still want debugging prints in the more complex parts while allowing the "main loop" to run fast. What do you think?
This is a bit hard to do because of the way GLib logging works: g_log() does the g_strdup_printf() even if nothing ends up printing the message.
Wait, I might've misread the code. Since nice_debug calls g_logv only when debug_enabled is TRUE, what I probably want is to disable nice_debug.

nice_agent_class_init
 -> nice_debug_init /* Init debug options depending on env variables */

In nice_debug_init, you seem to always call nice_debug_enable (TRUE). Shouldn't this be conditional on g_getenv ("NICE_DEBUG")?
(In reply to Ilya Konstantinov from comment #2)
> Wait, I might've misread the code. Since nice_debug calls g_logv only when
> debug_enabled is TRUE, what I probably want is to disable nice_debug.
>
> nice_agent_class_init
>  -> nice_debug_init /* Init debug options depending on env variables */
>
> In nice_debug_init, you seem to always call nice_debug_enable (TRUE).
> Shouldn't this be conditional on g_getenv ("NICE_DEBUG")?

nice_debug_init internally checks for the env condition, see:

flags_string = g_getenv ("NICE_DEBUG");
gflags_string = g_getenv ("G_MESSAGES_DEBUG");
(In reply to Rohan Garg from comment #3)
> nice_debug_init internally checks for the env condition, see:
> flags_string = g_getenv ("NICE_DEBUG");
> gflags_string = g_getenv ("G_MESSAGES_DEBUG");

It's true that it parses the flags, BUT nice_debug_enable (TRUE) is called unconditionally, and this is what allows g_logv to be called.
Ahhh, sorry about that, I misread your comment, you're absolutely right, I'll have a look at this tonight!
The relevant code:

void
nice_debug_init (void)
{
  ...
  flags_string = g_getenv ("NICE_DEBUG");
  ...
  if (flags_string)
    flags = g_parse_debug_string (flags_string, keys, 4);
  ...
  nice_debug_enable (TRUE);   // <----
  ...
}

void
nice_debug_enable (gboolean with_stun)
{
  nice_debug_init ();   // <----
  debug_enabled = 1;
  ...
}

void
nice_debug (const char *fmt, ...)
{
  ...
  if (debug_enabled) {
    ...
    g_logv (G_LOG_DOMAIN, G_LOG_LEVEL_DEBUG, fmt, ap);
    ...
  }
}

The part where nice_debug_enable calls nice_debug_init makes me suspect that the opposite case (nice_debug_init unconditionally calling nice_debug_enable) wasn't planned to be this way originally.
Originally, g_debug() (aka g_log(G_LOG_LEVEL_DEBUG, ...)) would always print to the screen, so we added NICE_DEBUG. However, recent GLib versions have G_MESSAGES_DEBUG to select which categories to print. You can also change G_MESSAGES_DEBUG at runtime, which is really useful for long-running applications. So my goal was to slowly deprecate NICE_DEBUG and make G_MESSAGES_DEBUG do the same thing. The annoying bit is that inside the g_log() implementation, it does the g_strdup_printf() before passing the content to the handler which checks the env var.
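For reference, usage would look something like this (./my-rtp-app is a placeholder binary, and I'm assuming libnice logs under the "libnice" domain):

```shell
# NICE_DEBUG: the libnice-specific switch being deprecated
NICE_DEBUG=all ./my-rtp-app

# G_MESSAGES_DEBUG (GLib >= 2.32): per-domain selection of g_debug() output;
# "all" prints every domain, or list specific domains separated by spaces
G_MESSAGES_DEBUG=libnice ./my-rtp-app
G_MESSAGES_DEBUG=all ./my-rtp-app
```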
(In reply to Olivier Crête from comment #7)
> You can also change G_MESSAGES_DEBUG at runtime, which is really useful
> for long-running applications.

So if you change G_MESSAGES_DEBUG at runtime, you can also call nice_debug_enable at runtime, right?
(In reply to Ilya Konstantinov from comment #8)
> So if you change G_MESSAGES_DEBUG at runtime, you can also call
> nice_debug_enable at runtime, right?

Ideally not, as I'd like to avoid forcing the "larger" application to have libnice-specific code.
Isn't G_MESSAGES_DEBUG libnice-specific anyway? How do you envision it? I usually imagine someone who wants to debug libnice specifically, suspecting there's something wrong in it. If they know to set an environment variable, they can just as well call a function (from within gdb or from within their code). In any case, I just want vsprintf not to be called in the network I/O loop :) all while maintaining our (OpenWebRTC) ability to debug libnice occasionally (so no ./configure --disable-assert). If you just #ifdef out this message code, I'll also be fine with it.
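For example, something like this from a gdb session attached to the running app (a sketch; assuming nice_debug_enable and nice_debug_disable are exported symbols in the process):

```
(gdb) call nice_debug_enable (1)    # turn libnice debug output on, with STUN
...
(gdb) call nice_debug_disable (1)   # and back off when done
```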
G_MESSAGES_DEBUG is from GLib and can be used in any GLib-based program. I've found it very useful to have upper-layer messages mixed with the libnice messages when debugging.
So you're saying that debug will remain enabled initially. If so, I guess we can add a nice_debug_disable call in our code, so it'll be disabled until really needed. The vsprintf "attracts attention" in the profiler during regular(!) good(!) ongoing network I/O. We must handle it.
(In reply to Olivier Crête from comment #7)
> So my goal was to slowly deprecate NICE_DEBUG and make G_MESSAGES_DEBUG do
> the same thing. The annoying bit is that inside the g_log() implementation,
> it does the g_strdup_printf() before passing the content to the handler
> which checks the env var.

Ilya's profiling shows that the thing contributing to slowness is the vsprintf and not the strdup, so perhaps we should worry about that only if it turns out to be a bottleneck too? (aka, someone complains about it :) )
Migrated to Phabricator: http://phabricator.freedesktop.org/T111