
Fast Tracing with GDB

Even though GDB is a traditional debugger, it provides support for dynamic, fast, user-space tracing. Tracing, in simple terms, is very fast logging of data from a target application or the kernel. The data is usually a superset of what a user would normally want from debugging but cannot get because of the debugger's overhead: the traditional debugging approach can alter the correctness of the application's output or change its behavior, hence the need for tracing. GDB is in fact one of the first projects that tried to integrate debugging and tracing in the same tool. It has been designed with enough decoupling to stay flexible and extensible; an example is the In-Process Agent (IPA), which is crucial to fast tracing in GDB but is not needed for TRAP-based normal tracepoints.

GDB’s Tracing Infrastructure

Tracing is performed with the trace and collect commands. The location where the user wants to collect data is called a tracepoint. It is just a special type of breakpoint, without support for running GDB commands on a hit. As the program runs and passes a tracepoint, data (register values, memory values, etc.) is collected into a trace buffer, optionally subject to a condition. Later, the collected trace snapshot can be examined with tfind. For now, tracing is restricted to remote targets (such as gdbserver). Apart from this kind of dynamic tracing, there is also support for static tracepoints, in which instrumentation points known as markers are embedded in the target and can be activated or deactivated; installing such a static tracepoint is known as probing a marker. Assuming you have started GDB and your binary is loaded, a sample trace session may look something like this:


(gdb) trace foo
(gdb) actions
Enter actions for tracepoint #1, one per line.
> collect $regs,$locals
> while-stepping 9
> collect $regs
> end
> end
(gdb) tstart
[program executes/continues...]
(gdb) tstop

This puts a tracepoint at foo, collects all register values and local variables when the tracepoint is hit, and then collects all register values for each of the next 9 single-stepped instructions. We can now analyze the data using tfind or tdump.


(gdb) tfind 0
Found trace frame 0, tracepoint 1
54    bar    = (bar & 0xFF000000) >> 24;

(gdb) tdump
Data collected at tracepoint 1, trace frame 0:
rax    0x2000028 33554472
rbx    0x0 0
rcx    0x33402e76b0 220120118960
rdx    0x1 1
rsi    0x33405bca30 220123089456
rdi    0x2000028 33554472
rbp    0x7fffffffdca0 0x7fffffffdca0
rsp    0x7fffffffdca0 0x7fffffffdca0
.
.
rip    0x4006f1 0x4006f1 <foo+7>
[and so on...]

(gdb) tfind 4
Found trace frame 4, tracepoint 1
0x0000000000400700 55    r1 = (bar & 0x00F00000) >> 20;

(gdb) p $rip
$1 = (void (*)()) 0x400700 <foo+22>

So one can inspect the data collected in different trace frames in this manner, and even write it out to a separate file if required. To dig deeper into how tracing works, let's look at GDB's two tracing mechanisms:

Normal Tracepoints

These are the basic, default tracepoints. The idea is similar to breakpoints: GDB replaces the target instruction with a TRAP or some other exception-causing instruction. On x86, this is usually int 3, which has a special single-byte encoding, 0xCC, reserved for it. Replacing only the first byte of the target instruction with this one byte ensures that neighboring instructions are not corrupted. During execution, the process hits the int 3, the CPU traps into the kernel, and the program state is saved; the OS then sends a SIGTRAP to the process. As GDB is attached to (or running) the process, it receives a SIGCHLD as notification that something happened to its child, and a wait() tells it that the child stopped with SIGTRAP. The SIGTRAP therefore never reaches the process, because GDB intercepts it. The original instruction is then restored (or executed out-of-line for non-stop multi-threaded debugging), and GDB transfers control to the trace collection code, which evaluates any condition that was set, gathers the data, and stores it in the trace buffer. The tracepoint instruction is then put back in place and normal execution continues. This is all fine and good, but there is a catch: the TRAP mechanism alters the flow of the application, and control passes to the OS, which costs some delay and thus speed. Even so, thanks to a fairly restrictive conditional-tracing design and the better interaction of interrupt-driven approaches with instruction caches, normal interrupt-based tracing in GDB is a robust technique. A faster solution would be a pure user-space approach, where everything is done at the application level.
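To make this concrete, here is a minimal sketch of that TRAP dance, written against the Linux ptrace API on x86-64 (hypothetical demo code with error handling omitted; it illustrates the technique, not GDB's actual implementation). A parent process plants 0xCC over the first byte of a function in a forked child, waits for the resulting SIGTRAP, notes where data collection would happen, then restores the original byte and lets the child continue:

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

/* The "inferior" function we want to trace. */
static void foo(void) { puts("foo() ran"); }

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        /* Child: let the parent trace us, then hit the tracepoint. */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        raise(SIGSTOP);                /* wait until the trap is planted */
        foo();
        return 0;
    }

    int status;
    uintptr_t addr = (uintptr_t)&foo;  /* same address in the forked child */
    waitpid(child, &status, 0);        /* child stopped itself with SIGSTOP */

    /* Plant the trap: save the original word, write 0xCC (int 3) over its low byte. */
    long orig = ptrace(PTRACE_PEEKTEXT, child, (void *)addr, NULL);
    ptrace(PTRACE_POKETEXT, child, (void *)addr, (void *)((orig & ~0xffL) | 0xcc));
    ptrace(PTRACE_CONT, child, NULL, NULL);

    waitpid(child, &status, 0);        /* SIGTRAP: the "tracepoint" was hit */
    if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTRAP) {
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        printf("trap at rip=0x%llx, trace collection would happen here\n",
               (unsigned long long)regs.rip);

        /* Restore the original instruction and back rip up over the int 3. */
        ptrace(PTRACE_POKETEXT, child, (void *)addr, (void *)orig);
        regs.rip -= 1;
        ptrace(PTRACE_SETREGS, child, NULL, &regs);
    }

    ptrace(PTRACE_CONT, child, NULL, NULL);
    waitpid(child, &status, 0);
    return 0;
}

Every hit takes a detour through the kernel and the tracing process, which is precisely the overhead the next mechanism tries to avoid.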

Fast Tracepoints

Owing to the limitations stated above, the fast tracepoint approach was developed. This special type of tracepoint uses a dynamic tracing approach: instead of planting a breakpoint, GDB uses a mix of the IPA and the remote target (gdbserver) to replace the target instruction with a 5-byte jump to a special section in memory called a jump pad. This jump pad, also known as a trampoline, first pushes all registers onto the stack (saving the program state), then calls the collector function for trace data collection, executes the displaced instruction out-of-line, and jumps back to the original instruction stream. I will probably write more about how trampolines work, and about techniques used in dynamic instrumentation frameworks like Dyninst, in a subsequent post.
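To give an idea of the core trick, here is a minimal, self-contained sketch of the 5-byte jump patch, assuming x86-64 Linux, a non-PIE build (so the two functions sit within the +/- 2 GB range of a rel32 jump) and purely illustrative names. A real jump pad also saves registers, calls the collector, runs the displaced instruction out-of-line and jumps back; here the "pad" is just a C function and the original body is never resumed, which is enough to show how the E9 rel32 patch is formed and installed:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void traced_function(void) { puts("original traced_function()"); }
static void jump_pad(void)        { puts("jump pad: collect trace data here"); }

static int patch_jmp(void *from, void *to)
{
    long page = sysconf(_SC_PAGESIZE);
    uintptr_t begin = (uintptr_t)from & ~(uintptr_t)(page - 1);
    uintptr_t end   = ((uintptr_t)from + 5 + page - 1) & ~(uintptr_t)(page - 1);

    /* Text pages are normally read-only; make the patched range writable. */
    if (mprotect((void *)begin, end - begin,
                 PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
        return -1;

    /* E9 rel32: displacement is relative to the end of the 5-byte jump. */
    int32_t rel = (int32_t)((intptr_t)to - ((intptr_t)from + 5));
    uint8_t jmp[5] = { 0xE9 };
    memcpy(jmp + 1, &rel, sizeof rel);
    memcpy(from, jmp, sizeof jmp);
    return 0;
}

int main(void)
{
    traced_function();                               /* original code path   */
    if (patch_jmp((void *)traced_function, (void *)jump_pad) == 0)
        traced_function();                           /* now lands in the pad */
    return 0;
}

Compiling with gcc -O0 -no-pie and running it should print the original line once and the jump-pad line on the second call.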

[Figure: gdb-working]

Fast tracepoints are set with the ftrace command, used almost exactly like the trace command. To install one, gdbserver sends a special command to the In-Process Agent in the following format,

FastTrace:<tracepoint_object><jump_pad>

where <tracepoint_object> is the object containing the bytecode for conditional tracing, the address, type, actions and so on, and <jump_pad> is the 8-byte head of the jump-pad location in memory. The IPA prepares all of that and, if all goes well, responds to such a query with,

OK<target_address><jump_pad><fjump_size><fjump>

where <target_address> is the address where the tracepoint is put in the inferior, <jump_pad> is the updated address of the jump-pad head, and <fjump> and <fjump_size> are the jump instruction sequence and its size, copied to the command buffer and sent back by the IPA. The remote target (gdbserver) then modifies the memory of the target process. Much more fine-grained information about fast tracepoint installation is available in the GDB documentation. This is a very nearly pure user-space approach to tracing. However, there is a catch: the instruction being replaced must be at least 5 bytes long, since the jump itself is 5 bytes long (on Intel x86), which means that fast tracepoints cannot be put everywhere in GDB. How the code is safely modified when patching over an instruction is a discussion for another time. This is probably the fastest way yet to perform dynamic tracing and a good candidate for such work.

GDB traces can be saved with tsave, and with the -ctf switch they can also be exported to CTF. I have not tried this, but hopefully such traces can at least be opened in TraceCompass for further analysis. GDB's fast tracepoint mechanism is quite fast, I must say, but in terms of usability LTTng is a far more advanced option. GDB, however, allows dynamic insertion of tracepoints, and the tracing features are well integrated in your friendly neighborhood debugger. Happy tracing!


Ease your kernel tracing struggle with LTTng Addons

If you are new to Linux tracing and/or LTTng, go no further: first head over to the new and awesome LTTng Docs to learn what this stuff is all about. I also wrote an article on the basics of LTTng a few months back, and followed it up with some more material.

OK, so now, for those who have been using LTTng for some time, and especially the kernel tracer, I am pretty sure you have faced a moment when you asked yourself: what if I could just modify those default tracepoints provided by LTTng and maybe add some more functionality to them? As an example, here is an interesting use-case I recently encountered.

Consider the netif_receive_skb event. A tracepoint is present in the kernel function of the same name, which is called whenever data is received in a socket buffer for any of the net devices. When this event is enabled, the tracepoint is hit and skbaddr, the data length and the device name are recorded by default. For more information on how this works, refer to the TRACE_EVENT macro in the kernel; there are a few articles on LWN explaining it all. The LTTng tracepoint definitions are no alien to this mechanism: they basically act as probes that hook onto these tracepoints in the kernel, and they are provided in the form of kernel modules. Refer to the internals of the lttng-modules package, and do have a look at what the LTTng Docs have to say about this. Coming back to our use-case: now consider that I want to record a netif_receive_skb event only when it is for the localhost device (skb->dev->name == "lo"). Hmm, interesting.
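For illustration, the instrumentation behind such an event looks roughly like the TRACE_EVENT definition below. This is a simplified sketch: in the kernel, netif_receive_skb is actually defined through an event class in include/trace/events/net.h, and lttng-modules carries its own adapted version of these macros. The TP_STRUCT__entry/TP_fast_assign pair is what makes skbaddr, the length and the device name end up in the trace:

/* Simplified sketch of the instrumentation behind netif_receive_skb. */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM net

#if !defined(_TRACE_NET_SKETCH_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_NET_SKETCH_H

#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/tracepoint.h>

TRACE_EVENT(netif_receive_skb,

	TP_PROTO(struct sk_buff *skb),

	TP_ARGS(skb),

	/* What gets stored in the ring buffer for each hit. */
	TP_STRUCT__entry(
		__field(void *, skbaddr)
		__field(unsigned int, len)
		__string(name, skb->dev->name)
	),

	/* How the fields are filled in when the tracepoint fires. */
	TP_fast_assign(
		__entry->skbaddr = skb;
		__entry->len = skb->len;
		__assign_str(name, skb->dev->name);
	),

	TP_printk("dev=%s skbaddr=%p len=%u",
		  __get_str(name), __entry->skbaddr, __entry->len)
);

#endif /* _TRACE_NET_SKETCH_H */

/* This generates the actual tracepoint code when CREATE_TRACE_POINTS is defined. */
#include <trace/define_trace.h>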

Now, instead of forcing you to understand the deep internals of how LTTng's macros work their magic behind the scenes, Francis Giraldeau did a little sorcery and churned out... *drum roll*... lttng-addons! Check out the addons branch in the repo. Apart from being a massive help in running research experiments rapidly, it can be used in some practical scenarios too; do have a look at the modules currently available in it. I have added a new netif_receive_skb_filter event (provided in addons/lttng-skb-recv.c) to demonstrate the use-case we were just discussing, and it can also act as a mini template for adding your own addon modules. Basically, the flow is: create your module C file, add an entry to the Makefile, add the custom TRACE_EVENT entry for your module in instrumentation/events/lttng-module/addons.h, build and install the modules, modprobe your new module, fire up lttng-sessiond as root and then enable your custom event. Such happiness, much wow!
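As a rough illustration of that flow, here is what such an addon module can boil down to. This is a simplified sketch with assumed names: it presumes an addons.h TRACE_EVENT entry called netif_receive_skb_filter with the same fields as the earlier sketch, and it uses the vanilla kernel tracepoint helpers, whereas the real addons/lttng-skb-recv.c wires the event into LTTng's own probe machinery:

#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/string.h>
#include <linux/tracepoint.h>
#include <trace/events/net.h>       /* the existing netif_receive_skb tracepoint */

#define CREATE_TRACE_POINTS
#include "addons.h"                 /* assumed: defines trace_netif_receive_skb_filter() */

static void probe_netif_receive_skb(void *data, struct sk_buff *skb)
{
	/* The filter: only record packets seen on the loopback interface. */
	if (skb->dev && strcmp(skb->dev->name, "lo") == 0)
		trace_netif_receive_skb_filter(skb);
}

static int __init skb_recv_init(void)
{
	/* Works only if the tracepoint is visible to modules on this kernel. */
	return register_trace_netif_receive_skb(probe_netif_receive_skb, NULL);
}

static void __exit skb_recv_exit(void)
{
	unregister_trace_netif_receive_skb(probe_netif_receive_skb, NULL);
	tracepoint_synchronize_unregister();
}

module_init(skb_recv_init);
module_exit(skb_recv_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Filtered netif_receive_skb tracing sketch");

The interesting part is that the filtering happens in the probe itself, before anything is written to the trace buffer.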

Once you have built the modules and installed them, restart lttng-sessiond as root and try to see if your newly created events are available:

$ lttng list -k | grep netif
netif_receive_skb (loglevel: TRACE_EMERG (0)) (type: tracepoint)
netif_rx (loglevel: TRACE_EMERG (0)) (type: tracepoint)
netif_receive_skb_filter (loglevel: TRACE_EMERG (0)) (type: tracepoint)

Do the usual stuff next and have a look at the trace:

$ lttng create
$ lttng enable-event -k netif_receive_skb_filter
$ lttng start
$ ping -c2 localhost
$ ping -c2 suchakra.in
$ lttng stop
$ lttng view
[22:51:28.120319188] (+?.?????????) isengard netif_receive_skb_filter: { cpu_id = 3 }, { skbaddr = 18446612135363067648, len = 84, name = "lo" }
[22:51:28.120347949] (+0.000028761) isengard netif_receive_skb_filter: { cpu_id = 3 }, { skbaddr = 18446612137085857024, len = 84, name = "lo" }
[22:51:29.120071966] (+0.999724017) isengard netif_receive_skb_filter: { cpu_id = 3 }, { skbaddr = 18446612137085856768, len = 84, name = "lo" }
[22:51:29.120102320] (+0.000030354) isengard netif_receive_skb_filter: { cpu_id = 3 }, { skbaddr = 18446612137085857280, len = 84, name = "lo" }

And there you have it, your first filtered kernel trace with your first custom addon module. Happy tracing!
