
How to Decompile / Reverse Engineer PyInstaller Binaries

Sharing Python programs is a pain, and one solution is to package the Python code, the interpreter, and the dependencies into a single executable for distribution. PyInstaller is a popular program for this task. But what if someone hands you such a compiled binary and you want to read the source code? Although PyInstaller has a well-described format and a bundled tool that let you start unpacking these programs yourself, it is much easier to use a tool written specifically for the job, namely pyinstxtractor.py. With this tool in hand it's as simple as the commands below. And voila! You can read the previously obfuscated source (and find what they were ashamed of).
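A hedged sketch of the workflow (the binary and module names here are hypothetical; pyinstxtractor.py unpacks the bundled .pyc files, and a decompiler such as uncompyle6 turns one back into readable source):

    python pyinstxtractor.py program.exe
    uncompyle6 program.exe_extracted/program.pyc > program_source.py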

.NET Bio is Significantly Faster on .NET Core 2.0

Summary: With the release of .NET Core 2.0, .NET Bio runs significantly faster (~2X) on Mac OSX due to better compilation and memory management.

The .NET Bio library provides genomic data processing functionality, such as parsing and alignment, that is too computationally intense for interpreted languages like Python or R, but can be performed efficiently in typed, compiled languages like Java and C#, without the productivity burden of coding in C/C++.

Historically, .NET Bio ran on two different frameworks.  On Windows, one could leverage all the substantial engineering Microsoft invested in their CLR to create fast and stable programs.  On Mac or Linux, one could use the independently implemented Mono framework, a solid effort by a significantly less resourced team that performed well but never really came close to the quality of the Microsoft implementation.  In practice, as bioinformaticians are weaned on the Linux/GCC toolchain and infrastructure, Mono was the frequently used option, and this led to performance and reliability issues.

All of this changed when Microsoft decided to open source .NET, releasing a version of their framework for Mac, Linux, and Windows called .NET Core.  This was a watershed moment, as it allows open source authors to take advantage of all the advanced memory management and compilation techniques Microsoft developed.  Better garbage collection, advanced pre-compilation, and vector instructions all allow higher-level languages like C# to perform nearly as well as aggressively optimized C/C++ for many practical applications.

To test out the new possibilities, I compared the time it takes to parse all the reads in a 1.2 GB BAM file using either the old Mono runtime or the new .NET Core runtime on my MacBook.  The test code simply counted reads after they were deserialized from the compressed format and converted into objects.
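A minimal sketch of that counting test follows; it assumes an enumerable Parse method on .NET Bio's BAMParser, and the exact names and signatures may differ from the real API.

    using System;
    using System.IO;
    using Bio.IO.BAM;

    class CountReads
    {
        static void Main(string[] args)
        {
            var parser = new BAMParser();
            long count = 0;
            using (var stream = File.OpenRead(args[0]))
            {
                // Each read is fully deserialized from the BGZF-compressed BAM
                // into a validated object before being counted.
                foreach (var read in parser.Parse(stream))
                {
                    count++;
                }
            }
            Console.WriteLine(count + " reads");
        }
    }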

The test case was timed on each platform as follows; times were taken in triplicate in random order, and the results are shown below.  I also benchmarked an equivalent task using the industry-standard samtools from htslib, a C library for the same task that uses tons of bit operations and is about as performant as one can get.[1]
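The timing itself might look something like the following (binary names are hypothetical; the samtools line mirrors the counting command used elsewhere on this blog):

    time mono CountReads.exe reads.bam        # old Mono runtime
    time dotnet CountReads.dll reads.bam      # .NET Core 2.0 runtime
    time samtools view reads.bam | wc -l      # C baseline from htslib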

Profiling Each Platform

Mono: The Mono platform allows us to easily profile the code using Instruments on Mac OSX.  Examining the profile of the running code, we can see where the bottlenecks are.  A portion of time is spent in libz decompressing the underlying gzipped data, but much is also spent on garbage-collection-associated events, e.g. preparing for memory allocation with mono_handle_new.

.NET Core 2.0:  Being newly available on multiple platforms, .NET Core 2.0 has no good profiling story on Mac OSX yet.  Although typical profilers cannot understand the JITted C# code, we can look at the compiled C/C++ code, and here we see that .NET spends twice as much time in libz, as a percentage, as Mono does.  Since much of this task is simply decompressing data, more time in the decompression library means the rest of the code is running faster, which is a good sign.

To further profile the managed code, I wrote a custom MacOS profiler that changed how each method was compiled, inserting a hook that tracked the enter and exit time of each function with a P/Invoke call to C++ code at the entrance and exit of each managed method frame, from which I calculated the inclusive time spent in each method.  Although this instrumentation is expected to greatly skew the results, it showed that the majority of the managed time is spent parsing the data, and much of that is spent doing validations!
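The hooks themselves lived in C++ and were injected at compile time, but the bookkeeping idea can be sketched in plain C# (names here are hypothetical stand-ins, not the actual profiler):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    static class InclusiveTimer
    {
        static readonly Dictionary<string, (double total, long calls)> stats =
            new Dictionary<string, (double, long)>();
        static readonly Stack<(string name, long start)> frames =
            new Stack<(string, long)>();

        // Called on entry to every managed method frame
        public static void Enter(string method) =>
            frames.Push((method, Stopwatch.GetTimestamp()));

        // Called on exit; inclusive time includes time spent in callees
        public static void Exit()
        {
            var (name, start) = frames.Pop();
            double seconds = (Stopwatch.GetTimestamp() - start) / (double)Stopwatch.Frequency;
            stats.TryGetValue(name, out var s);
            stats[name] = (s.total + seconds, s.calls + 1);
        }
    }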

Method Name                                       Inclusive Time (s)   Call Count
Bio.IO.BAM.BAMParser.GetAlignedSequence           2.012356             5255
Bio.Util.Helper.StringContainsIllegalCharacters   0.670411             487805
Bio.IO.SAM.SAMOptionalField.set_Tag               0.551263             711275
Bio.IO.SAM.SAMOptionalField.set_VType             0.549131             711275
Bio.IO.SAM.SAMOptionalField.IsValidTag            0.547594             711275
Bio.IO.SAM.SAMOptionalField.IsValidVType          0.54615              711275
Bio.IO.SAM.SAMOptionalField.set_Value             0.543942             711275
Bio.IO.SAM.SAMOptionalField.IsValidValue          0.543379             711275
SAMTools: Continuing the trend that the more performant library spends more time in libz, samtools spends half of its time there, and the other half basically doing all the decoding operations that convert the binary data into usable data (in sam_format1). SAMTools is optimized C code with minimal error checking, so it is a rough upper bound on the speed that could be obtained.

Conclusions

The C# code was written in a manner free of performance concerns and has tons of validation checks and unnecessary copies. Yet .NET Core performed only ~2X slower than high-performance C code, because thanks to the engineering in .NET Core, the code now runs about twice as fast on Mac OSX as it used to.

Although C# is still slower than C, it is the clear winner here.  The SAMTools code bottlenecks at a method that, although heavily optimized, is horrifically difficult to read.  In contrast, the .NET code bottlenecks at a method that is filled with nearly gratuitous memory allocations and a plethora of checks where useful exceptions could be thrown and meaningful error messages delivered if assumptions aren't met.  Plus, it has all the advantages of memory safety and quality tooling we've come to expect in 2017.  It's intriguing to ponder how the benchmark would look if the C# code had been written in a style as performance-focused as the C code, avoiding unnecessary allocations, using pointers, or using the new ref returns available in C# 7.0. I've shown previously that such changes can significantly increase the parser's performance.
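For a flavor of the C# 7.0 ref returns mentioned above, here is a tiny sketch (types and names are hypothetical): returning a reference into a backing array lets callers touch an element without copying the struct.

    struct Read { public int Pos; }

    class ReadBuffer
    {
        Read[] reads = new Read[1024];

        // Returns a reference into the array rather than a copy of the struct
        public ref Read At(int i) => ref reads[i];
    }

A caller can then write buffer.At(7).Pos = 42 and mutate the element in place, with no intermediate copy.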

The .NET ecosystem is moving along well for meaningful scientific computing. Importantly, although hand-tuned C/C++ code will always be more performant, now that .NET Core is open source we can embed any time-critical functionality in the runtime itself, allowing us to have things like ludicrously performant suffix arrays interacting with parsers.  .NET Core 2.0 is a big step forward.

References

1. This is for methods that fully parse the BAM read. PySAM, which lazily decodes the data, can finish a simple counting task in about 15 seconds, but this is not representative of a workflow that requires full decoding.

Why R Math Functions on Windows are Slow, and How to Fix It

R on Windows has much slower versions of the log, sine, and cosine functions than are available on other platforms, and this can be a serious performance bottleneck for programs that frequently call these math functions.  The reason is that the library R uses to obtain the log function on Windows (libmingwex.a) contains an implementation that is out of date and much slower than other available versions.  That the glibc implementations of common math functions are slow is a known issue that others have discussed on the internet.  This post uses the log function as an example to show specifically why it is slow, and then suggests some quick workarounds for these math functions.

The log Function on MinGW / Windows

Comparing the assembly code of the log function as generated by MinGW on Windows with the assembly generated for the log function on Mac OSX shows why the code is slow. Below is the assembly for log as disassembled from MinGW (their implementation is basically a cut/paste from GNU libc). The crucial feature is that most instructions start with an "f", indicating that they use the x87 floating point registers, and one instruction, fyl2xp1, is one of the most expensive operations out there. This instruction takes a log using hardware microcode and is known to be slower than most other possible ways to calculate a log on modern machines. These floating point register instructions were state of the art way back when, but nowadays most hardware uses the special XMM/YMM registers and instructions, which can make for much faster code to calculate logarithms (See Section 17 for more information on this).
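The original listing is not reproduced here, but as an illustrative sketch (not the exact MinGW output), the classic x87 sequence for a natural log looks like this; fyl2xp1 is the related log(1+x) form:

    fldln2                  ; push ln(2) onto the x87 register stack
    fld     qword ptr [x]   ; push the argument x
    fyl2x                   ; st0 = ln(2) * log2(x) = ln(x), computed in microcode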

The log Function on Mac OSX

Demonstrating faster assembly is the Mac OSX implementation of log, shown below.  The key feature is that it uses the XMM registers and a more modern, performant algorithm.  Note that many other implementations (e.g. the Microsoft compiler's) also use faster versions like this. That is, Windows is not slow; R is slow on Windows. It's all the same Intel chips under the hood.

Solving the problem

Ideally we could just push a faster log implementation to the R core repository for Windows, but the fact of the matter is the R core team does not accept code changes that only improve performance (and fair play to them; R is the most cross-platform software out there, and that wasn't easy to do).  Also, realistically, the current implementation is fine for most people's work. So the solution is to switch out the log function for a better one only when it matters, and if it matters, that likely means most of the relevant library is already written in compiled code. Where to get a cross-platform-compatible log function? The folks over at Julia ran into a similar problem with slow log functions on different machines. They looked at two versions in libm (e.g. Libm Version #1 and Libm Version #2), and also some really crazy implementations such as the one available from Yepp. What they settled on is a very good function, and we can port it directly into R.  An example is shown in the pull request here.

Profiling Rcpp package code on Windows

Profiling Rcpp code on Unix/Mac is easy, but it is difficult on Windows because R uses a compilation toolchain (MinGW) that produces files that are not understood by common Windows profiling programs.  Additionally, the R build process often removes the symbols that allow profilers to produce sensible interpretations of their data.  The following steps allow one to profile Rcpp code on Windows.

Change compilation settings to add in symbol settings

A default R installation typically has compiler settings, placed in the equivalent of C:\Program Files\R\R-3.3.1\etc\x64\Makeconf, that strip information needed for profiling during the Rcpp compilation process, in particular a line which reads DLLFLAGS=-s.  To override this and add some additionally needed flags, add a file at the equivalent of C:\Users\YOURNAME\.R\Makevars on your machine (note the '.' before R) containing the lines below.  You can verify this worked correctly by checking that -gdwarf-2 appears in the compilation messages, and that -s is missing from the final linker step.
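A minimal Makevars sketch along these lines (exact flags may need adjusting for your toolchain):

    # emit DWARF debug symbols that MinGW-aware profilers can read
    CFLAGS   += -gdwarf-2
    CXXFLAGS += -gdwarf-2
    # override the default DLLFLAGS=-s so symbols are not stripped at link time
    DLLFLAGS =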

Run a profiler which understands MinGW compiled code

The next key step is to run a profiler that can understand the Unix-like symbols on Windows.  Two free and good options are Very Sleepy and AMD's CodeAnalyst (which also works on Intel chips).  Very Sleepy is very good at basic timings and providing stack traces, while AMD's profiler can drill down to the assembly of a process. Both profilers are good, but an example with AMD's is shown below.
  1. Open the program and set up a quick session to start and run a sample R script that uses your code.
  2. Next run the profiler and examine the results.  For example, here I can see that half the time was spent in my code, versus half in the R core's code (generating random numbers).  Digging further down, I can see at the assembly level what the biggest bottlenecks were in my code.  It's often helpful to look at the original source files in addition to the assembly; this can be enabled by setting directory information via Tools -> CodeAnalyst Options -> Directories.

C# vs. Java, Xamarin vs. Oracle, Performance Comparison version 2.0

Today I noticed that the SIMD implementation of the Mandelbrot set algorithm I blogged about last year was successfully submitted to the language shootout webpage. However, I was a bit disappointed to see the C# version was still slower than the Java version, despite my use of the special SIMD instructions (though I was pleased that a quick port of my code to F# absolutely annihilates the OCaml version of this benchmark).

I had benchmarked my code as faster than the Java code on my Mac, the CentOS cluster at my work and two Ubuntu virtual machines (Mono >3.0, LLVM compiler). What gave?

Undeterred, I thought I would try to use SIMD to improve another C# benchmark, and took a shot at the N-body benchmark test. Once again, the version I wrote, in my hands, was much faster. But when submitted to the shoot-out and accepted, it was either much slower, or didn’t even run. My submission using the SSE instructions not only didn’t beat Java, it was actually slower than the original C# version!

Below are the timings on my top-of-the-line Mac for the two benchmarks; in both cases the C# program runs in 80-90% of the time the Java program takes. There are several key takeaways here.

(Table: C# vs. Java timings.)

C# and Java have no meaningful performance differences.

C# may use a lot less memory, but this is my optimized C# code, and it only beats the optimized Java by <= 20%. When does that matter?

C# and Java’s similar performance has different underpinnings

There are a lot of ways code can be fast, and I was surprised that Java and C# achieve similar speeds in very different ways.

The key advantage of the C# code was the SIMD instructions, which in theory give a ~2X speed up. However, they only win by ~20%. Why?

I think the assembly gives the answer. Just some quick snippets tell the whole story. Here is some C# code for the N-Body problem compiled to assembly:
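The original listing did not survive the page conversion, but an illustrative sketch of the pattern it showed (hand-written, not actual Mono/LLVM output) is:

    vhaddpd xmm1, xmm2, xmm2   ; packed horizontal add, two doubles at once
    vhaddpd xmm3, xmm4, xmm4   ; same instruction, independent registers: pipelines
    vmulpd  xmm1, xmm1, xmm3   ; packed multiply, still only low-numbered registers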

The important insights into performance from the assembly here are:

  1. Similar instructions are stacked on top of each other, allowing for pipelining (e.g. the same vhaddpd instruction follows the same vhaddpd, and both use different registers, so can execute simultaneously).
  2. The “p” in the instructions (i.e. vhadd-“p”-d). This stands for “packed” meaning we are packing/doing two operations at once via SIMD.
  3. Only registers XMM1-XMM4 appear in the instructions. There are more registers available, and more pipelining possible, but the Mono/LLVM compiler appears to only use the low-number/scratch registers. It is easier to write compilers that obey this restriction.

Now let’s compare that to some Java assembly emitted by the Oracle runtime for the same benchmark:
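Again, an illustrative sketch of the pattern (not the actual HotSpot output):

    vmulsd xmm6,  xmm2, xmm3     ; scalar multiply, one double per instruction
    vmulsd xmm7,  xmm2, xmm4
    vmulsd xmm9,  xmm5, xmm5
    vmulsd xmm12, xmm8, xmm8
    vmulsd xmm15, xmm10, xmm11   ; high registers in play, deep pipelining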

The important insights here are:

  1. Java uses the version of the instructions with “s” (i.e. vmul-“s”-d) meaning single, and gets no SSE advantage. No doubt this makes writing compilers easier.
  2. However, Java is using lots of registers, XMM15 shows up!
  3. As a result of using all those registers, the Java code pipelines beautifully; note that up to 5 vmulsd instructions appear in a row. The JVM is simply brilliant at packing operations together, and this means that even though I, the programmer, was smart and used SSE2, my C# code only won by 20%, not 200%. It's hard to beat Java's pipelining.

All of which makes me wonder: what if we took the high-level language advantages of C# (low-overhead value types, easy interop via pointers, baked-in generics, etc.) and combined them with the low-level advantages of the JVM (better array bounds check elimination, pipelining compiler optimizations, and maybe someday the occasional stack allocation of reference type variables...)?

NuMTs, mtDNA sequencing and Aligners

There are a lot of NuMTs (nuclear-encoded mitochondrial sequences) in the genome, and when the mtDNA is sequenced, some reads may align to the nuclear genome instead of the mtDNA because of this.  But how much winds up in the nuclear DNA, and where does it go? To answer this, I simulated reads from a diverse collection of mitochondria and tracked where they landed when aligned with bwa mem. The reads were simulated from the whole collection of mtDNA molecules available from phylotree; the simulated reads were 100 bp in length, had a 1% error rate, and had an insert size normally distributed around a mean of 150 bp with a std. dev. of 30 (but bounded at a minimum insert of 40 and a maximum of 700).

After simulating, I aligned the reads with bwa mem (commands sketched below) and discovered that almost all reads align to the mtDNA; only 3% of reads aligned elsewhere.  As a result, the distribution of coverage depth across the whole genome is very bi-modal.  For reads that did align to the nucleus, the MAPQ was typically 0, but could be as high as 60, with an unexplained peak at 27.  The normalized coverage by position across the mtDNA shows that some regions are clearly more affected by NuMTs than others.

Reads from the first and last 500 bp of the mtDNA are poorly aligned by bwa.  It appears most go to chromosome 17, but their true location is betrayed by their mate pair: only 0.6% of reads in this region that map to the nuclear DNA do not have their paired read map to the mtDNA.

I also wanted to see how reads representing a heteroplasmic deletion would be handled.  I simulated reads that either spanned or included a deletion randomly chosen to be in the mtDNA; again, virtually all mapped to the mitochondria, and the coverage profile looked similar to the simulation with complete reads. Perhaps most reassuringly, almost all reads are mapped: checking for unmapped reads (command sketched below) showed only one unaligned read out of the simulated millions, and this read had many errors compared with the original sequence it was simulated from. The result of all of this is one large BED file giving the location of all possible reads from elsewhere.
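Hedged sketches of the two commands referenced above (reference and file names are hypothetical):

    # align the simulated paired-end reads
    bwa mem ref.fa sim_1.fq sim_2.fq | samtools sort -o aligned.bam
    # count reads that failed to align (SAM flag 0x4 = unmapped)
    samtools view -c -f 4 aligned.bam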

The .NET Bio BAM Parser is Smoking Fast

The .NET Bio library has an improved version of its BAM file parser, which makes it significantly faster and easily competitive with the current standard, the C-coded SAMTools, for obtaining and working with sequencing data. The chart below compares the time in seconds for the old version of the parser and the current version to parse a 76 MB BAM file. The current parser can easily create ~150K validated sequence objects per second on the clunky old servers I typically run code on. Note that the Windows and Unix numbers are from completely different machines and are not comparable. Also included is a comparison to a "custom" version of the parser that I wrote, which uses unsafe code, assumes the system architecture is always little endian, caches strings, and does some other tricks to get further performance improvements and reduce memory usage.

The comparison to samtools is based on the system time to parse the file using this task on the same Unix server used for the C# tests:

samtools view Test.bam | wc -l

It is just meant to give a general idea of the performance comparison, as there are several important differences between the samtools and .NET Bio tests. The C# version was not allowed to reuse memory for objects, as it was supposed to be working as a data producer, while the samtools version processes reads one at a time and does reuse memory. C# also made a lot of dictionaries to aid quick access to the read groups, which isn't done by samtools. However, samtools had to write the files to the output pipe, while the C# version did not, which undoubtedly introduces a fair bit of overhead for it. Both tools, however, are clearly plenty fast, and at this stage further performance improvements would come from lazy evaluation (or from not sticking unnecessary information like the original quality scores in the BAM files!), and the language won't matter much.

Performance Comments

One task when parsing BAMs is unpacking lots of information that is packed together in arrays.  In SAMTools and the current .NET Bio parser, this is done with lots of unpacking of bits by integer manipulations.  Because C# has pointers and value-type structs, however, I discovered that it is a lot more fun just to define a structure that contains those fields and unpack directly with a pointer cast in C# (a sketch of the idea is shown below). Blam! Now all the data related to the position, bin, and read group is in the object, with three lines that copy the data very fast.

So where are the remaining bottlenecks? On Windows, about a third of the time is spent doing the decompression. In Mono, because the decompression is done by zlib and not in managed code, it's effectively free. Currently, the quality data and sequence data are passed around a bunch, and the code could likely be made about 10% faster by not copying that data but reusing a single byte array each time. However, it is so fast it hardly seems worth worrying about.
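A sketch of the struct-overlay idea follows; the field layout mirrors the fixed-size front of a BAM alignment record, but this is an illustration under that assumption, not the actual .NET Bio code.

    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential, Pack = 1)]
    struct BamAlignmentHeader
    {
        public int RefID;      // reference sequence id
        public int Pos;        // 0-based leftmost coordinate
        public uint BinMqNl;   // bin << 16 | MAPQ << 8 | read name length
        public uint FlagNc;    // flag << 16 | number of CIGAR operations
        public int LSeq;       // length of the sequence
        public int NextRefID;  // mate's reference id
        public int NextPos;    // mate's position
        public int TLen;       // template length
    }

    static class BamDecoder
    {
        // Reinterpret the raw bytes as the struct in one shot (assumes little-endian),
        // instead of shifting and masking each field out by hand, e.g.
        // int mapq = (int)((binMqNl >> 8) & 0xff);
        public static unsafe BamAlignmentHeader ReadHeader(byte[] block, int offset)
        {
            fixed (byte* p = &block[offset])
            {
                return *(BamAlignmentHeader*)p;
            }
        }
    }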

Software Patents

In the dark moments when one wonders if their research is doing enough to better the world, it’s always nice to remember you’re not a patent lawyer.  The radio program This American Life did a fantastic investigation of this issue recently, highly recommended and very entertaining, so I thought I would point it out here.  It explains how broken the system is, and although I don’t know how best to ensure people are compensated for their creative works, clearly we need better solutions.

The outcome of perceived or real patent fights basically moved me away from the Linux desktop.  In 2006 I thought the Linux desktop would become the world standard (the other options were far from awesome).  Instead the technology company that spends the least on research and development rose to prominence and essentially everyone I know who uses Linux comes from the tail end of the most affluent class of society (which probably makes them great targets for the now integrated Amazon search).  Here’s hoping this mess gets cleaned up in the next few years.

Comparing data structure enumeration speeds in C#

Determining which data structure to use for storing data involves trade-offs between how much memory it requires and how long different operations, such as insertions, deletions, or searches, take.  In C#, using a linear array is the fastest way to enumerate all of the objects in a collection.  However, the use of an array or list can have multiple drawbacks.  For example, arrays and lists in C# can have frustrating size limits, as objects in the CLR, regardless of system architecture, cannot be over 2 GB in size*.  Similarly, while good data structures for enumeration, lists and arrays are slower than other alternatives for searching.

I recently had a dataset for which the ability to search, enumerate, and hold over 2 GB of data in the collection was desirable.  These criteria naturally make a tree-like data structure appealing.  However, in a high-level language like C#, where performance can heavily depend on which run-time optimizations the virtual machine applies, the actual performance hit incurred by using a tree rather than a list during an enumeration is more accurately answered with empirical data than with Big O notation.

So, I tested how enumeration performance varies between a balanced binary tree and a list.  Since the tree was enumerated by using the yield statement when a relevant node was found, I compared three methods: one where a class containing a binary tree is enumerated with a yield statement, one where a class containing a list is enumerated with a yield statement (that is, the class enumerates its internal list and yields each element), and one where that same class is enumerated by simply passing out the original List object.  In all cases, 2^20 objects were enumerated in total.  Here is the time comparison.

Method                                       Time
Balanced binary tree with yield statement    0.110 seconds
List with yield statement                    0.055 seconds
List without yield statement                 0.016 seconds

These values do not vary substantially with the order of operations or across repeated trials.  So, yowza, an order of magnitude difference in enumerating the objects, which is far more severe than I would have imagined (I was expecting about 3X).  It appears much of this is due to the method call and overhead of the yield statement.  The two methods using the list are logically identical in code, but it seems the compiler can't quite figure that out, leading to a 5X drop in performance.  Out of curiosity, I also compared a dictionary's enumeration speed, which took about twice as long as the fast list method, but which ran out of memory with a higher number of elements pretty readily.

My take-home from this is to prefer arrays and lists in C# whenever the enumeration speed of a collection appears limiting.  Traversal of trees in the language is very slow, so trees are generally not recommended when the operations per node are fast.

 

*This may change in newer versions of the MS CLR (the gcAllowVeryLargeObjects setting in .NET 4.5 relaxes this limit).

Code Below

using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace TestTreeVersusList
{
    class Program
    {
        public static int TreeDepth = 20;  // 2^20 elements, matching the timings above
        public static int ElementNumber = (int)Math.Pow(2, TreeDepth);
        static void Main(string[] args)
        {
            Stopwatch sw = new Stopwatch();
            ulong val = 0;

            // First, enumerate the list through a yield-based iterator
            ListEnumerator le = new ListEnumerator();
            sw.Reset();
            sw.Start();
            foreach (node num in le.GetNodes())
            {
                val += num.value;
            }
            sw.Stop();
            Console.WriteLine("Time Taken For List with yield: " + sw.Elapsed.TotalSeconds.ToString() + " seconds");

            sw.Reset();
            sw.Start();
            foreach (node num in le.curList)
            {
                val += num.value;
            }
            sw.Stop();
            Console.WriteLine("Time Taken For List without yield: " + sw.Elapsed.TotalSeconds.ToString() + " seconds");
            le = null;

            // Create and enumerate the binary tree
            BinaryTreeEnumerator tree = new BinaryTreeEnumerator();
            sw.Reset();
            sw.Start();
            val = 0;
            foreach (node num in tree.GetNodes())
            {
                val += num.value;
            }
            sw.Stop();
            Console.WriteLine("Time Taken For Tree: " + sw.Elapsed.TotalSeconds.ToString() + " seconds");
            tree = null;
            DictionaryEnumerator dict = new DictionaryEnumerator();
            sw.Reset();
            sw.Start();
            val = 0;
            foreach (var kv in dict.dict)
            {
                // kv is a KeyValuePair<ulong, node>; sum the stored values
                val += kv.Value.value;
            }
            sw.Stop();
            Console.WriteLine("Time Taken For Dictionary: " + sw.Elapsed.TotalSeconds.ToString() + " seconds");
            dict = null;
        }
    }

    class node
    {
        public ulong value;
        public object o;
        public node Left;
       public node Right;
    }
    class BinaryTreeEnumerator
    {
        node root;
        int nodeCount;
        // Iterative pre-order traversal, yielding each node as it is visited
        public IEnumerable<node> GetNodes()
        {
            Stack<node> traversalStack = new Stack<node>(2 * Program.TreeDepth);
            traversalStack.Push(this.root);
            while (traversalStack.Count > 0)
            {
                node current = traversalStack.Pop();
                if (current != null)
                {
                    if (current.value != 0)
                    {
                        traversalStack.Push(current.Right);
                        traversalStack.Push(current.Left);
                    }
                    yield return current;
                }
            }
        }
        // Build a complete binary tree breadth-first until ElementNumber nodes exist
        public BinaryTreeEnumerator()
        {
            this.root = new node() { value = 1, o = null };
            Queue<node> traversalQueue = new Queue<node>(2 * Program.TreeDepth);
            traversalQueue.Enqueue(this.root);
            while (traversalQueue.Count > 0 && nodeCount < Program.ElementNumber)
            {
                node current = traversalQueue.Dequeue();
                current.Left = new node() { value = 1, o = null };
                current.Right = new node() { value = 1, o = null };
                nodeCount += 2;
                traversalQueue.Enqueue(current.Right);
                traversalQueue.Enqueue(current.Left);
            }
        }
    }
    class ListEnumerator
    {
        public List<node> curList;
        public ListEnumerator()
        {
            curList = new List<node>(Program.ElementNumber);
            for (int i = 0; i < Program.ElementNumber; i++)
            {
                curList.Add(new node() { value = 1, o = null });
            }
        }
        public IEnumerable<node> GetNodes()
        {
            foreach (node n in curList)
            {
                 yield return n;
            }
        }
    }
    class DictionaryEnumerator
    {
        public Dictionary<ulong,node> dict = new Dictionary<ulong,node>();
        public DictionaryEnumerator()
        {
            for (ulong i = 0; i < (ulong)Program.ElementNumber; i++)
            {
                dict[i] = new node() { value = i, o = null };
            }
        }
    }
}