SuiteSparse vs MATLAB built-in solvers (a quick geometry processing benchmark)

December 16th, 2014

Recently, a student I’m working with installed the SuiteSparse toolbox for matlab and reported wild speedups (20x) over matlab’s built-in backslash (\ or mldivide) linear system solver. I thought surely precomputing a Cholesky factorization with a good ordering (chol with three output parameters) would fare better. But the student was claiming even 5-10x speedups over that.

Too good to be true? I decided to do a quick benchmark on some common problems I (and others in geometry processing) run into.

I started with solving a Laplace equation over a triangle mesh in 2d. I generated Delaunay triangle meshes of increasing resolution in the unit square. Then I solved the system A x = b in three ways:

\ (mldivide):

x = A\b;

chol + solve:

[L,~,Q] = chol(A,'lower');
x = Q * (L' \ (L \ ( Q' * b)));

cholmod2 (SuiteSparse):

x = cholmod2(A,b);
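
For each mesh resolution I timed the three methods with a harness along these lines (a minimal sketch, not my exact script; it assumes the system matrix A has already been built for the mesh):

% b is a single right hand side here; swap in rand(n,100) for the
% multiple right hand side experiment further below
n = size(A,1);
b = rand(n,1);
tic;
x1 = A\b;
t_mldivide = toc;
tic;
[L,~,Q] = chol(A,'lower');
x2 = Q * (L' \ (L \ ( Q' * b)));
t_chol = toc;
tic;
x3 = cholmod2(A,b);
t_cholmod2 = toc;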

For the Laplace equation with a “single right hand side” (b is an n-long column vector), I didn’t see any noticeable improvement:

[Plot: matlab vs suitesparse, 2d harmonic]

The only thing surprising here is actually how handily backslash is beating the other two.

Laplace operators in 2D are very sparse matrices (~7 entries per row). Next I tried solving a bi-Laplace equation, whose system matrix has the sparsity pattern of the squared Laplace operator (~17 entries per row). Here we see a difference, but it doesn’t seem to scale well with resolution.

[Plot: matlab vs suitesparse, 2d biharmonic]

The sparsity patterns of Laplace and bi-Laplace operators are larger in 3d. In particular a nicely meshed unit cube might have upwards of 32 entries per row in its bi-Laplacian. Solving these systems we start to see a clear difference. cholmod2 from SuiteSparse is seeing a 4.7x speed up over \ (mldivide) and a 2.9x speed up over chol+solve. There is even the hint of a difference in slope in this log-log plot suggesting SuiteSparse also has better asymptotic behavior.

[Plot: matlab vs suitesparse, 3d biharmonic]

So far we’re only considering a single right hand side. Matlab’s parallelism for multiple right hand sides (during “back substitution”) seems pretty pitiful compared to SuiteSparse. By increasing the number of right-hand sides to 100 (b is now n by 100) we see a clear win for SuiteSparse over matlab: a 12x speed up over \ (mldivide) and an 8.5x speed up over chol+solve. Here the asymptotic differences are much more obvious.

[Plot: matlab vs suitesparse, 3d biharmonic, 100 right-hand sides]

My cursory conclusion is that SuiteSparse is beating matlab for denser sparse matrices and multiple right-hand sides.

Coffee Shop Compiler

December 15th, 2014

Here’s a proposal for a compiler/build system for developers like myself who often find themselves coding in a coffee shop with a strong wifi connection, but poor power outlet availability and limited battery life.

The standard coding and debugging cycle looks like:

  1. Code for a bit
  2. Think for a bit
  3. Repeat steps 1 and 2 for a bit
  4. Compile (make)
  5. If compile failed return to step 2
  6. Run executable
  7. Return to step 2

The power bottleneck (for my typical cycle) is by far step 4. My idea is to outsource compiling to a server (plugged into a wall outlet). The naive implementation would change the cycle to:

  1. Code for a bit
  2. Think for a bit
  3. Repeat steps 1 and 2 for a bit
  4. compile on server
    1. send code to server
    2. compile on server
    3. if compile failed, send back errors and return to 2
    4. send back executable
  5. Run executable
  6. Return to step 2

A better implementation might roll git into the make command on the local machine (a rough shell sketch follows the list):

  1. Code for a bit
  2. Think for a bit
  3. Repeat steps 1 and 2 for a bit
  4. special make
    1. git commit; git push on client
    2. git pull on server
    3. regular make on server
    4. if compile failed, send back errors and return to 2
    5. send back executable
  5. Run executable
  6. Return to step 2
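
Concretely, that “special make” might be little more than a shell script like this (a hypothetical sketch: the remote name build, the project path ~/project, and the output binary app are all invented here):

#!/bin/sh
set -e
git commit -am "wip" || true                 # commit whatever changed (no-op if clean)
git push build master                        # push to the build server
ssh build 'cd ~/project && git pull && make' # compile remotely; errors print locally
scp build:~/project/app .                    # fetch the resulting executable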

An even fancier implementation might try to keep the changing files in sync during idle time in step 2.

I guess this is assuming my executable is relatively small. Here dynamic (shared) libraries would be especially handy, since then only a small executable has to travel back over the wifi.

MATLAB high idle CPU usage

December 14th, 2014

Using my laptop at a coffee shop today I was dismayed to find my battery down to 85% after just 15 minutes. The culprit seems to be that MATLAB’s IDE keeps the CPU busy (6-15%) even while idle, and the bug is well known. The fix was to issue:

com.mathworks.mlwidgets.html.HtmlComponentFactory.setDefaultType('HTMLRENDERER');

and restart matlab. Now it’s down to around 1% idle CPU usage.

Automatic Differentiation Intuition Dump

December 13th, 2014

Every so often I re-read the wikipedia page for automatic differentiation and get confused about forward and reverse accumulation. Both are neat and have appropriate and inappropriate applications. There are many tutorials online; here’s my intuition to add to the pile.

Forward accumulation

At each step of computation, maintain a derivative value. We seed each initial variable with derivative 1 or 0 according to whether we’re differentiating with respect to it.

Augmenting numerical types with a “dual value” (X := x + x'ε) such that ε*ε = 0, and overloading math operations, is an easy way to implement this method.
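
In C++, for example, the overloading trick takes only a few lines (a minimal sketch; the type and names are mine, not from any particular library):

#include <cstdio>

// number augmented with a "dual value"
struct Dual
{
  double val; // x
  double dot; // x', carried through every operation
};
// sum rule
Dual operator+(Dual a, Dual b){ return {a.val+b.val, a.dot+b.dot}; }
// product rule: the ε*ε term (a.dot*b.dot) vanishes
Dual operator*(Dual a, Dual b){ return {a.val*b.val, a.val*b.dot + a.dot*b.val}; }

int main()
{
  Dual x = {3.0, 1.0}; // seed derivative 1: we differentiate with respect to x
  Dual f = x*x + x;    // f(x) = x² + x
  printf("f = %g, df/dx = %g\n", f.val, f.dot); // 12 and 7
}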

For f:R→Rⁿ and n>>1 this is ideal since we end up computing and storing 1 value at each computation step. If there are m computation variables then we track m derivatives. Work and memory is O(m) to get the n-long vector derivative.

For f:Rⁿ→R this is not ideal. To take a gradient we need to store n derivatives for each computation variable or sweep through the computation n times: O(mn) work.

Reverse accumulation

At each step of computation, maintain the current step’s gradient with respect to its inputs and pointers to its input variables. When retrieving derivatives, evaluate the outermost gradient and apply the chain rule recursively to its remembered arguments.

One can also implement this with a special numerical type and math operator overloading. This type should maintain the entire expression graph of the computation (or at least store the most immediate computation and live pointers to the previous computation variables of the same type), with gradients also provided by each mathematical operation. I suppose one way to implement this is by altering math operations to augment their output with handles to functions computing gradients. Traditionally, compilers are assumed to be bad at evaluating such a stored expression graph, but I wonder whether modern compilers with inline function optimization couldn’t optimize this.
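
Here’s a minimal tape-based sketch of the idea in C++ (all names are mine; a real implementation would hide the tape behind an overloaded numerical type as described above):

#include <cstdio>
#include <vector>

// Each node remembers its (up to two) parents and the local partial
// derivatives of the node with respect to them.
struct Tape
{
  struct Node { int a, b; double da, db; };
  std::vector<Node> nodes;
  // a free input variable (no parents)
  int var(){ nodes.push_back({-1,-1,0,0}); return (int)nodes.size()-1; }
  // result of a binary operation on nodes a,b with local gradients da,db
  int op(int a, int b, double da, double db)
  { nodes.push_back({a,b,da,db}); return (int)nodes.size()-1; }
  // single reverse sweep: push d(out)/d(node) back through the tape, O(m)
  std::vector<double> grad(int out)
  {
    std::vector<double> g(nodes.size(), 0.0);
    g[out] = 1.0;
    for(int i = out; i >= 0; i--)
    {
      if(nodes[i].a >= 0){ g[nodes[i].a] += g[i]*nodes[i].da; }
      if(nodes[i].b >= 0){ g[nodes[i].b] += g[i]*nodes[i].db; }
    }
    return g;
  }
};

int main()
{
  Tape t;
  double xv = 3, yv = 4;
  int x = t.var(), y = t.var();
  int p = t.op(x, y, yv, xv);   // p = x*y, ∂p/∂x = y, ∂p/∂y = x
  int f = t.op(p, x, 1.0, 1.0); // f = p + x
  std::vector<double> g = t.grad(f);
  printf("df/dx = %g, df/dy = %g\n", g[x], g[y]); // y+1 = 5 and x = 3
}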

In any case, for f:Rⁿ→R and n>>1 this is ideal since a single derivative extraction traversal involving m computation variables will (re)visit each computation variable once: O(m).

For f:R→Rⁿ this is not ideal. At each computation variable we need to store n derivatives and keep them around until evaluation: O(m*n) memory and work. Whereas forward accumulation just tracks a single derivative value per computation variable: O(m) memory.

A Simple Method for Correcting Facet Orientations in Polygon Meshes Based on Ray Casting

November 23rd, 2014

[Image: corrected facet orientations on a bike, cell phone, chair, and truck]

We’ve finally published our paper A Simple Method for Correcting Facet Orientations in Polygon Meshes Based on Ray Casting in the Journal of Computer Graphics Tools. The paper was written by Kenshi Takayama, Alec Jacobson, Ladislav Kavan, and Olga Sorkine-Hornung.

Abstract: We present a method for fixing incorrect orientations of facets in an input polygon mesh, a problem often seen in popular 3D model repositories, such that the front side of facets is visible from viewpoints outside of a solid shape represented or implied by the mesh. As opposed to previously proposed methods which are rather complex and hard to reproduce, our method is very simple, only requiring sampling visibilities by shooting many rays. We also propose a simple heuristic to handle interior facets that are invisible from exterior viewpoints. Our method is evaluated extensively with the SHREC Generic 3D Warehouse dataset containing 3168 manually designed meshes, and is demonstrated to be very effective.

You can find an implementation of this method in the embree extension of libigl: igl/embree/reorient_facets_raycast.h. If you store your mesh vertices in the rows of a #V by 3 matrix V and triangle indices in the rows of a #F by 3 matrix F, then you can quickly reorient your triangles to all point outward consistently with a single function call:

#include <igl/embree/reorient_facets_raycast.h>
...
Eigen::MatrixXi FF; // #F by 3 list of output triangle indices, some rows potentially reversed
Eigen::VectorXi I; // #F list of booleans revealing whether facet was reversed
igl::reorient_facets_raycast(V,F,FF,I);

As a preprocessor to our generalized winding numbers, this forms a powerful tool for determining the inside from outside for arbitrary meshes. This is especially important for creating volumetric tetrahedral meshes.

Tangible and modular input device on Swiss TV

November 17th, 2014

[Image: input device on TV]

Our input device was featured on the Swiss television show “Gadgets” on the SRF channel. They’re using it to control a robot made out of a Ken Barbie doll.

Drop-in replacement header for switching from MathJax to KaTeX in markdown

November 7th, 2014

I wanted to try out switching to KaTeX from MathJax for our libigl tutorial. KaTeX needs a little bit of javascript besides including the script itself: something has to iterate over all class=math tags and render the latex code.

Also, it seems that markdown translates \\(x+y\\) into <span class=math>\(x+y\)</span>, and the \( parentheses throw off KaTeX. You get an error like:

ParseError: KaTeX parse error: Expected 'EOF', got '\(' at position 2: \(x+y

So my header to switch from MathJax simply iterates over all math tags on document load using jquery and strips these parentheses (which I expect on all tags):

<link rel="stylesheet" href="http://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.1.1/katex.min.css">
<script src="http://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.1.1/katex.min.js"></script>
<script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
<script src="http://code.jquery.com/jquery-migrate-1.2.1.min.js"></script>
<script>
$(function(){
  $(".math").each(function() {
    // strip the leading "\(" and trailing "\)" that markdown leaves in the tag
    var texTxt = $(this).text().slice(2,-2);
    var el = $(this).get(0);
    // block-level (div) math gets display style
    var addDisp = (el.tagName == "DIV") ? "\\displaystyle" : "";
    try {
      katex.render(addDisp + texTxt, el);
    } catch(err) {
      $(this).html("<span class='err'>" + err + "</span>");
    }
  });
});
</script>

I didn’t invest much more into bulletproofing this because it seems like KaTeX is still in its infancy in terms of LaTeX support. It doesn’t support unicode characters (e.g. ∑, ∆, π) or common tags like \mathbf or \begin{cases}. Hopefully it will eventually; it is much faster than MathJax for simple equations.

Using two arrays to store undirected edges instead of a map

November 7th, 2014

I profiled some of the libigl code the other day and found a performance hot spot in the way I sometimes dealt with undirected edges. In a triangle mesh, each oriented triangle has three directed edges. It’s sometimes useful to find the other triangles sharing an edge. So if my first triangle has an edge {i,j}, I may want to find all other triangles that have either the same edge {i,j} or the reverse edge {j,i}. The slow way to store such a relationship is with a std::map or std::unordered_map. For example, you might write:

std::map<std::pair<int,int>,std::vector<int>> edge_to_faces;
for(int f = 0; f < F.rows(); f++)
{
  for(int c = 0; c < 3; c++)
  {
    // directed edge {i,j} opposite corner c of triangle f
    int i = F(f,(c+1)%3);
    int j = F(f,(c+2)%3);
    if(i<j)
    {
      edge_to_faces[std::make_pair(i,j)].push_back(f);
    }else
    {
      edge_to_faces[std::make_pair(j,i)].push_back(f);
    }
  }
}

You can do slightly better using an std::unordered_map (aka hash map) and writing a compress(i,j) function which maps your {i,j} or {j,i} pair to a single number.
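
Such a compress might, for example, sort the pair and pack it into a single 64-bit key (a sketch, assuming vertex indices fit in 32 bits):

#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// order the pair, then pack both indices into one 64-bit key
std::uint64_t compress(int i, int j)
{
  if(i > j){ std::swap(i,j); }
  return (std::uint64_t(i) << 32) | std::uint64_t(j);
}
std::unordered_map<std::uint64_t,std::vector<int>> uedge_to_faces_map;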

It turns out it’s much faster to build and use two arrays: one which maps each directed edge instance to a unique undirected edge, and another holding the data stored at each undirected edge. Libigl provides a way of getting this first array via the igl::unique_edge_map function:

// E:    #F*3 by 2 list of directed edges
// uE:   #uE by 2 list of unique undirected edges
// EMAP: #F*3 list mapping each directed edge to its undirected edge in uE
// uE2E: #uE list of lists mapping each undirected edge back to directed edges
igl::unique_edge_map(F,E,uE,EMAP,uE2E);

Then the previous example looks like:

std::vector<std::vector<int>> uedge_to_faces(uE.rows());
const int m = F.rows();
for(int f = 0; f < m; f++)
{
  for(int c = 0; c < 3; c++)
  {
    // EMAP maps the cth directed edge of face f to its undirected edge
    uedge_to_faces[EMAP(f+c*m)].push_back(f);
  }
}

Storing EMAP and uedge_to_faces is technically more memory than just edge_to_faces, but not asymptotically more if the data we’re storing for each undirected edge is O(#F), as is the case here. I think this boils down to memory access. The map’s accesses are scattered but the array’s are contiguous. The price to build EMAP is a sort, O(m log m), but it can be done in place once and for all, whereas the map needs to maintain a sort while building and conduct O(log m) searches when accessing.

Robust mesh boolean operations in libigl, gptoolbox

November 4th, 2014

I’ve added robust mesh boolean operations to libigl and mex wrappers for matlab in gptoolbox. For comparison and as an alternative, I also included new wrappers for cork’s boolean operations.

Check out the boolean entry in the libigl tutorial.
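
From C++, computing, say, the union of two meshes boils down to a single call along these lines (a sketch; note that the header path and namespace of mesh_boolean have moved around between libigl versions):

#include <igl/copyleft/cgal/mesh_boolean.h>
...
Eigen::MatrixXd VA,VB,VC; // vertex positions of inputs A, B and output C
Eigen::MatrixXi FA,FB,FC; // triangle indices of inputs A, B and output C
igl::copyleft::cgal::mesh_boolean(VA,FA,VB,FB,igl::MESH_BOOLEAN_TYPE_UNION,VC,FC);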

[Image: mesh booleans on cheburashka and knight]

Linker order error with matlab dynamic libraries and gcc on mac os x

November 3rd, 2014

I had a nasty time tracking down a runtime error in our libigl tutorials. It seems that the newest version of matlab comes with some dynamic libraries which do not agree with the libstdc++ of my gcc compiler. I had a link command that looked something like this:

/opt/local/bin/g++ -std=c++11 \
  /Applications/MATLAB_R2014b.app/bin/maci64/libmex.dylib \
  /Applications/MATLAB_R2014b.app/bin/maci64/libmx.dylib \
  ...

Here’s the debugger output of the error I got at runtime:

* thread #1: tid = 0x153aa3, 0x00007fff86d08866 libsystem_kernel.dylib`__pthread_kill + 10, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
  * frame #0: 0x00007fff86d08866 libsystem_kernel.dylib`__pthread_kill + 10
    frame #1: 0x00007fff8341f35c libsystem_pthread.dylib`pthread_kill + 92
    frame #2: 0x00007fff8571fb1a libsystem_c.dylib`abort + 125
    frame #3: 0x00007fff8b1c007f libsystem_malloc.dylib`free + 411
    frame #4: 0x00000001001d47a0 libmex.dylib`std::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >::~basic_stringbuf() + 96
    frame #5: 0x0000000100451da2 libstdc++.6.dylib`std::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> >::~basic_ostringstream() + 36

Seems like libmex and libmx were the culprits. I fixed this problem by telling the linker to find the stdc++ library before the matlab dynamic libraries:

/opt/local/bin/g++ -std=c++11 \
  -lstdc++ \
  /Applications/MATLAB_R2014b.app/bin/maci64/libmex.dylib \
  /Applications/MATLAB_R2014b.app/bin/maci64/libmx.dylib \
  ...