Tuesday, November 25, 2014

Quickly turn a Gentoo daily snapshot into a Docker image

Download the latest stage3 snapshot from a Gentoo mirror.

Then pipe it straight into docker import:
$ bunzip2 -c stage3-amd64-20141120.tar.bz2 | docker import - gentoo-amd64
7dbd254474e511597f160342bf8d828406f52467a62061b29ea0b3009b806b05
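
To sanity-check the new image, start a shell in it (gentoo-amd64 is just the name we picked above):

$ docker run -it --rm gentoo-amd64 /bin/bash
# cat /etc/gentoo-release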

That's it!

Wednesday, October 29, 2014

Write ZooKeeper client logs using Google glog

I don't like the fact that every other piece of logging in my program is happily written through Google glog while ZooKeeper clutters stderr with its own messages. Besides, I like glog's format better, so I made a quick change to have the ZooKeeper C client write its log through glog. Here is a quick patch:

diff --git a/third-party/zookeeper/src/c/include/zookeeper_log.h b/third-party/zookeeper/src/c/include/zookeeper_log.h
index e5917cb..6519587 100644
--- a/third-party/zookeeper/src/c/include/zookeeper_log.h
+++ b/third-party/zookeeper/src/c/include/zookeeper_log.h
@@ -28,6 +28,24 @@ extern "C" {
 extern ZOOAPI ZooLogLevel logLevel;
 #define LOGSTREAM getLogStream()

+#define ZOOKEEPER_GLOG
+#ifdef ZOOKEEPER_GLOG
+ZOOAPI void zk_glog_message(int curLevel, int line, const char* filename,
+                            const char* message);
+/* We can't include glog/log_severity.h because it's in C++ style, so
+ * we hard code the corresponding log level here */
+#define LOG_ERROR(x) if(logLevel>=ZOO_LOG_LEVEL_ERROR) \
+    zk_glog_message(2, __LINE__, __FILE__, format_log_message x)
+#define LOG_WARN(x) if(logLevel>=ZOO_LOG_LEVEL_WARN) \
+    zk_glog_message(1, __LINE__, __FILE__, format_log_message x)
+#define LOG_INFO(x) if(logLevel>=ZOO_LOG_LEVEL_INFO) \
+    zk_glog_message(0, __LINE__, __FILE__, format_log_message x)
+#define LOG_DEBUG(x) if(logLevel==ZOO_LOG_LEVEL_DEBUG) \
+    zk_glog_message(0, __LINE__, __FILE__, format_log_message x)
+#else
+
 #define LOG_ERROR(x) if(logLevel>=ZOO_LOG_LEVEL_ERROR) \
     log_message(ZOO_LOG_LEVEL_ERROR,__LINE__,__func__,format_log_message x)
 #define LOG_WARN(x) if(logLevel>=ZOO_LOG_LEVEL_WARN) \
@@ -36,6 +54,7 @@ extern ZOOAPI ZooLogLevel logLevel;
     log_message(ZOO_LOG_LEVEL_INFO,__LINE__,__func__,format_log_message x)
 #define LOG_DEBUG(x) if(logLevel==ZOO_LOG_LEVEL_DEBUG) \
     log_message(ZOO_LOG_LEVEL_DEBUG,__LINE__,__func__,format_log_message x)
+#endif

 ZOOAPI void log_message(ZooLogLevel curLevel, int line,const char* funcName,
     const char* message);
diff --git a/third-party/zookeeper/src/c/src/zk_glog.cc b/third-party/zookeeper/src/c/src/zk_glog.cc
new file mode 100644
index 0000000..ad2bdb3
--- /dev/null
+++ b/third-party/zookeeper/src/c/src/zk_glog.cc
@@ -0,0 +1,10 @@
+#include "third-party/google/glog/logging.h"
+#include "zookeeper_log.h"
+
+extern "C" {
+void zk_glog_message(int curLevel, int line, const char* filename,
+                     const char* message) {
+  google::LogMessage(filename, line, curLevel).stream() << message;
+}
+}
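
With the patch applied nothing changes on the caller's side, except that glog must be initialized before the ZooKeeper handle is created. A minimal sketch of a client using it (the include paths, connection string, and timeout are placeholders for your own setup):

#include "third-party/google/glog/logging.h"
#include "zookeeper.h"

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);       // must run before any LOG_* macro fires
  zoo_set_debug_level(ZOO_LOG_LEVEL_INFO);  // still gates which messages get emitted
  // All client log lines from this handle are now written through glog.
  zhandle_t* zh = zookeeper_init("localhost:2181", NULL, 30000, NULL, NULL, 0);
  // ... use zh ...
  zookeeper_close(zh);
  return 0;
}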

Monday, June 16, 2014

Run an X11 application inside Docker without VNC or SSH

I'm so annoyed that people would reach for SSH X forwarding or VNC to run an X11 application inside Docker. Docker is just a special chroot environment, so there must be a more efficient way to set up the communication channel X11 needs, and here it is:

docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY <image> <command>

/tmp/.X11-unix contains the unix domain socket created by your running X server; by mounting it inside the container, the X11 app inside Docker can happily talk to it.
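
For example, to run xeyes from a hypothetical image named my-x11-image (if your X server enforces access control, you may also need to allow local connections first):

$ xhost +local:
$ docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY my-x11-image xeyes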

Friday, January 24, 2014

Evaluate performance bottlenecks with perf

The perf tool is a profiler for Linux (kernel 2.6+) that uses hardware performance counters to find the bottlenecks of a program in both userspace and kernel space.

This tutorial is a good example of how to use it.

To install it on Ubuntu: sudo apt-get install linux-tools

A quick example of how to find the bottleneck of a program:


  1. Run the program as usual to collect a performance record, for example
    "perf record iperf -c 192.168.1.1 -d". This command asks perf to execute "iperf -c 192.168.1.1 -d" and collect the performance numbers into a "perf.data" file.
  2. View the performance record with "perf report". This shows how much time was spent in each userspace or kernel function. In my example, it looks like this:
     31.95%  iperf  [kernel.kallsyms]        [k] md5_transform
     16.99%  iperf  [aesni_intel]            [k] _aesni_enc1
      3.14%  iperf  [kernel.kallsyms]        [k] do_csum
      2.50%  iperf  [kernel.kallsyms]        [k] memcpy
      2.16%  iperf  [kernel.kallsyms]        [k] __ticket_spin_lock
      1.78%  iperf  [aesni_intel]            [k] _aesni_dec4
      1.10%  iperf  [kernel.kallsyms]        [k] nf_iterate
      0.97%  iperf  [nf_conntrack]           [k] ____nf_conntrack_find
      0.91%  iperf  [kernel.kallsyms]        [k] __slab_free
      0.81%  iperf  [kernel.kallsyms]        [k] skb_release_data
      0.80%  iperf  [kernel.kallsyms]        [k] fib_table_lookup
      0.80%  iperf  [kernel.kallsyms]        [k] memset
      0.80%  iperf  [kernel.kallsyms]        [k] __copy_user_nocache
      0.77%  iperf  [cxgb4]                  [k] process_responses
      0.77%  iperf  [ip_tables]              [k] ipt_do_table
      0.69%  iperf  [nf_conntrack]           [k] __nf_conntrack_find_get
      0.68%  iperf  [kernel.kallsyms]        [k] md5_update
      0.67%  iperf  [nf_conntrack]           [k] nf_conntrack_in
      0.56%  iperf  [nf_conntrack]           [k] hash_conntrack_raw
      0.52%  iperf  [kernel.kallsyms]        [k] dst_release

    This is because I'm running a TCP benchmark with iperf over IPsec in ESP and AH mode, so a lot of CPU time goes to MD5 (for the integrity checksum) and AES (for encryption). Now that we know where the bottlenecks are, it's time to go fix them.
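
Once the hot symbols are known, perf can drill down further. For instance, to get a per-instruction cost breakdown of the hottest function from the report above (annotating kernel symbols may require root or kernel debug symbols):

$ perf annotate md5_transform

perf top is also handy here: it gives the same kind of ranking as perf report, but live and system-wide.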