Zhigang Deng, Ladislav Kavan, JP Lewis and I are teaching a course this week at siggraph.

We’ll cover a wide variety of skinning related techniques and we’re hosting our notes at skinning.org. Check it out and tell us what you think!

**Update:** Hilariously, it turns out that either WordPress or my WordPress markdown plugin is choking on the emoji inserted into this post. Thus, to see the actual emoji commands, see this plaintext version

We were recently joking around about using emoji in math equations. The idea was to satirize the bad rap that exterior calculus symbols like the Hodge star operator (★) and the “musical isomorphisms” (♭, ♯) get in the computer graphics community.

I found a solution for `upTeX`. This works by first extracting all of the emojis as PDFs and then including the PDFs via `\includegraphics` whenever a `\coloremoji{...}` command is found. This unfortunately did not work with my TeX Live `pdflatex` setup. With some help, I’ve redesigned a `coloremoji.sty` stylesheet that allows you to *directly* include emoji in your LaTeX documents.

A Hello, EmojiWorld LaTeX document would look like this:

WordPress dies on emoji, see plaintext version
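Since WordPress eats the emoji, here’s a rough sketch of what such a document might look like (the particular emoji below is a placeholder of mine, not the exact listing from the plaintext version):

```
\documentclass{article}
\usepackage{coloremoji}
\begin{document}
Hello, 🌍!
\end{document}
```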

This produces something that should look like:

You can also use emoji in mathmode:

WordPress dies on emoji, see plaintext version

Download the coloremoji package and simply add `\usepackage{coloremoji}` to the top of your document.

Actually, Nobuyuki Umetani gave a talk where he used graphic icons in math (I believe successfully!) to explain Sensitive Couture (change in clothing with respect to change in mouse interaction):

Upon implementing the modal analysis part of “Shape Decomposition Using Modal Analysis” [Huang et al. 2009], I noticed a small typo in the derivation.

The issue regards the construction of the matrix C in Equation 6. C_i is responsible for the quadratic terms involving c_i. Namely,

```
∑_{j∈N(i)} w_ij || c_i × (p_i - p_j) ||^2
```

The squared norm of a cross product can be written as the product of the individual squared norms minus the squared dot product: `||a × b||^2 = ||a||^2 ||b||^2 - (a·b)^2`. Or in our case:

```
∑_{j∈N(i)} w_ij || c_i ||^2 || p_i - p_j ||^2 - w_ij ( c_i · (p_i - p_j) )^2
```

Now, in the 2009 paper only the latter term is accounted for in C_i, as it is equivalent to the covariance matrix (a sum of outer products):

```
c_i' C_i c_i = c_i' ( ∑_{j∈N(i)} w_ij (p_i - p_j) (p_i - p_j)' ) c_i
```

where a’ is the transpose of a.

C_i should also contain a diagonal term accounting for `∑_{j∈N(i)} w_ij || c_i ||^2 || p_i - p_j ||^2`. This is of course easy to add to C_i.

**Update**: Huang replied to me with another way to write the construction of C_i correctly, using notation more familiar from the original paper:

```
∑_{j∈N(i)} w_ij (p_i - p_j)_x (p_i - p_j)_x',
```

where `a_x` is the matrix representing the cross product with a, and it follows that `a_x a_x' = ||a||^2 I - a a'`.
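This identity is easy to sanity-check numerically. Here’s a small NumPy sketch (my own, not from the correspondence) that builds the cross-product matrix and compares both sides, including the identity matrix multiplying `||a||^2`:

```python
import numpy as np

def cross_matrix(a):
    # Matrix a_x such that cross_matrix(a) @ b == np.cross(a, b).
    return np.array([[    0, -a[2],  a[1]],
                     [ a[2],     0, -a[0]],
                     [-a[1],  a[0],   0.0]])

a = np.array([1.0, 2.0, 3.0])
ax = cross_matrix(a)
# a_x a_x' = ||a||^2 I - a a'
assert np.allclose(ax @ ax.T, a.dot(a) * np.eye(3) - np.outer(a, a))
```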

And then it’s working. Here’re some modes of a cow:

Our vacuum keeps blowing the fuse in our apartment and each time *the internet* goes down. I haven’t tracked down why this happens, but here’s my solution after a couple times.

- Open http://192.168.1.1/WanMAC.asp (the router’s configuration page) and note down the MAC address currently being cloned
- Unplug power to the router and modem and wait 15 seconds
- Plug the router and modem back in
- Unplug the ethernet cable from the Linksys router
- Connect the original computer (with the MAC address above) directly to the router using ethernet, and check that some page loads
- Restore the Linksys to factory settings using the little button on the back
- Connect to the restored wireless router (it should have the name `linksys`)
- Open http://192.168.1.1/WanMAC.asp and enable MAC address cloning with the previously noted address
- Open http://192.168.1.1/Wireless_Basic.asp and restore the wireless name *exactly*
- Reconnect to the wireless router using the new name
- Open http://192.168.1.1/WL_WPATable.asp and set up the password again *exactly*: choose WEP, then type the password as `Key 1`
- Once again reconnect to the wireless router using the new name; your machine should remember that it used to know this password if it’s the same

Recently I needed to invert (a.k.a. solve against) a large matrix with a special form. It was a block matrix where each block was diagonal:

```
M = [M11 M12 ... M1n
     M21 ...
     ...
     Mn1 Mn2 ... Mnn];
```

where each `Mij` is a diagonal matrix. In my case `Mij = Mji`, though I won’t rely on this in the following.

Note that this is not the same as a “block diagonal matrix”, though it can be rearranged into one.

The inverse of my kind of matrix will have the same sparsity pattern. I can find it using a recursive application of the blockwise matrix inversion formula.

Here’s a little function to do exactly that:

```
function N = diag_blk_inv(M,k)
  % DIAG_BLK_INV Invert a block matrix made out of square diagonal blocks. M
  % should be of the form:
  %
  % M = [M11 M12 ... M1n
  %      M21 ...
  %      ...
  %      Mn1 Mn2 ... Mnn];
  %
  % where each Mij is a diagonal matrix.
  %
  % Inputs:
  %   M  n*k by n*k matrix made out of diagonal blocks.
  % Outputs:
  %   N  n*k by n*k matrix inverse of M
  %
  switch k
  case 1
    assert(isdiag(M));
    N = diag(sparse(1./diag(M)));
    return;
  end
  assert(size(M,1)==size(M,2),'Must be square');
  assert(rem(size(M,1),k)==0,'Must be divisible by k');
  n = size(M,1)/k;
  % Extract 4 blocks
  A = M(0*n+(1:(k-1)*n),0*n+(1:(k-1)*n));
  B = M(0*n+(1:(k-1)*n),(k-1)*n+(1:n));
  C = M((k-1)*n+(1:n),0*n+(1:(k-1)*n));
  D = M((k-1)*n+(1:n),(k-1)*n+(1:n));
  assert(isdiag(D));
  % https://en.wikipedia.org/wiki/Invertible_matrix#Blockwise_inversion
  Ainv = diag_blk_inv(A,k-1);
  % Schur complement
  S = (D-C*Ainv*B);
  assert(isdiag(S));
  Sinv = diag_blk_inv(S,1);
  N = [Ainv+Ainv*B*Sinv*C*Ainv -Ainv*B*Sinv; ...
       -Sinv*C*Ainv              Sinv];
end
```
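As a sanity check on the blockwise formula itself (a NumPy sketch of my own, not a translation of the MATLAB function), here is a single 2×2-block split with diagonal blocks, compared against a dense inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # size of each diagonal block (an assumption for this sketch)

# Build a 2x2 block matrix whose blocks are all diagonal.
A = np.diag(rng.uniform(1, 2, n))
B = np.diag(rng.uniform(1, 2, n))
C = np.diag(rng.uniform(1, 2, n))
D = np.diag(rng.uniform(5, 6, n))  # keep the Schur complement well-conditioned
M = np.block([[A, B], [C, D]])

# Blockwise inversion; S is the Schur complement of A.
Ainv = np.diag(1.0 / np.diag(A))
S = D - C @ Ainv @ B
Sinv = np.diag(1.0 / np.diag(S))  # S is diagonal since every block is
N = np.block([[Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
              [-Sinv @ C @ Ainv,                   Sinv]])

assert np.allclose(N, np.linalg.inv(M))
```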

**Update:** No doubt someone will see that I’m computing the inverse explicitly and call me a heretic. Actually, in the case of diagonal matrices there’s some room for argument why one might want to do this. In terms of efficiency, this is orders of magnitude faster than calling `inv(M)*B` or `M\B`. However, matlab’s `\` doesn’t seem to figure out that it could use Cholesky decomposition. In that case `L = chol(M); L\(L'\B)` is roughly the same speed as `diag_blk_inv(M,k)*B`: both are fast.

**Update:** To be my own policeman, it probably is safer to use `chol` in this case. Otherwise, in general, I would need to check the scaling of `M`. It’s unfortunate though, because *a priori* I don’t have any reason to believe `chol` will see the special structure of this matrix.

Dealing with xyz coordinates in linear algebra for computer graphics, there are two predominant styles for concatenating a list of many positions/vectors/matrices. The first groups common coordinates together. For example, a list of mesh vertex coordinates may look like:

```
V = [x1
x2
...
xk
y1
y2
...
yk
z1
z2
...
zk];
```

The other style groups common objects together, listing all coordinates in a stream. For our vertex positions this would look like:

```
V = [x1
y1
z1
x2
y2
z2
...
xk
yk
zk];
```

In OpenGL and array languages, the first option has a data access stride of 1 when pulling x-coordinates, while the second has a stride of n (the dimension, e.g. 3). Conversely, when pulling whole position vectors, the first option has a stride of k and the second a stride of 1. Hence, the two styles really have appropriate use cases depending on data access patterns.

This gets a bit more complicated if we’re not just storing a scalar in each row, but an n-vector.

In matlab, it’s very easy to convert between one representation and the other. Suppose you have a k*m-by-n matrix A stored in the first format above. We can rearrange it to move all common objects together (forming a stack of k m-by-n matrices), with:

```
A = [ 1  2  3; ...
      4  5  6; ...
      7  8  9; ...
     10 11 12; ...
     13 14 15; ...
     16 17 18; ...
     19 20 21; ...
     22 23 24];
n = 3;
B = reshape(permute(reshape(A,[],2,n),[3 2 1]),n,[])';
```

and the output is

```
B =
1 2 3
13 14 15
4 5 6
16 17 18
7 8 9
19 20 21
10 11 12
22 23 24
```

To convert back:

```
C = reshape(permute(reshape(B',n,2,[]),[3 2 1]),[],n);
```

and we get again:

```
C =
1 2 3
4 5 6
7 8 9
10 11 12
13 14 15
16 17 18
19 20 21
22 23 24
```
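For what it’s worth, the same shuffle can be sketched in NumPy; note NumPy is row-major, so the reshapes look different from the column-major MATLAB versions above (`k`, `m`, `n` are my names here: k objects, m rows per object, n columns):

```python
import numpy as np

k, m, n = 4, 2, 3
A = np.arange(1, k*m*n + 1).reshape(k*m, n)  # coordinate-grouped layout

# Group common objects together: a stack of k m-by-n blocks.
B = A.reshape(m, k, n).transpose(1, 0, 2).reshape(k*m, n)
# And back again.
C = B.reshape(k, m, n).transpose(1, 0, 2).reshape(k*m, n)
```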

Daniele and I have completed a working tutorial for libigl. The course at SGP 2014 also encouraged us to advertise a proper Version 1.0 Beta release of libigl.

This change marks our confidence that libigl is ready for wide use in geometry processing research. We still qualify the release with “Beta” to indicate that the library is a work in progress and that documentation, for example, may not be as thorough as in commercial or better established libraries. We’re continuing to develop libigl and encourage you to `git pull` our repo and stay up to date.

Here’s the table of contents of our tutorial. Each section comes with a working mini example `main.cpp` which demonstrates a specific function or routine available in libigl:

- Chapter 1: Introduction to libigl
- Chapter 2: Discrete Geometric Quantities and Operators
- Chapter 3: Matrices and Linear Algebra
- Chapter 4: Shape Deformation
- Chapter 5: Parametrization
- Chapter 6: External libraries
- Chapter 7: Outlook for continuing development

Upon implementing the Intrinsic Delaunay Laplacian according to the helpful notes in “An Algorithm for the Construction of Intrinsic Delaunay Triangulations with Applications to Digital Geometry Processing” [Fisher et al. 2006], I noticed a small typo. Specifically, I’m following this version of the document. The issue regards the computation of the length of the “flipped edge” f. Currently the last step in section 2.2 uses the Law of Cosines, writing that:

```
f = sqrt(b^2 + c^2 - 2 cos(alpha + delta))
```

This should be:

```
f = sqrt(b^2 + c^2 - 2 b c cos(alpha + delta))
```
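A quick numeric check of the corrected formula (my own, not from the notes): with a right angle between sides of lengths 3 and 4, the flipped edge should have length 5:

```python
import math

b, c = 3.0, 4.0
alpha_plus_delta = math.pi / 2  # right angle between sides b and c
# Corrected Law of Cosines, including the b*c factor:
f = math.sqrt(b**2 + c**2 - 2*b*c*math.cos(alpha_plus_delta))
# f is (numerically) 5
```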

Here’re some notes to avoid (common? obvious?) pitfalls when implementing conformalized mean curvature flow (“Can Mean-Curvature Flow Be Made Non-Singular?” [Kazhdan et al. 2012]).

Naively building a smoothing operation out of the identity matrix (data term) and a discrete pointwise Laplacian (smoothness term) would lead to iterations looking something like this:

```
L = cotmatrix(V,F);
I = speye(size(L));
U = V;
while true
  M = massmatrix(U,F,'barycentric');
  U = (I-delta*(M\L))\U;
  tsurf(F,U);
  drawnow;
end
```

This is very badly scaled: we need to first invert the mass matrix (introducing numerical problems if our mesh is irregular) and then solve the system.

Algebraically we can fix these issues by multiplying both sides of the system by our mass matrix:

```
L = cotmatrix(V,F);
U = V;
while true
  M = massmatrix(U,F,'barycentric');
  U = (M-delta*L)\(M*U);
  tsurf(F,U);
  drawnow;
end
```
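Spelled out, the algebraic step is just (with U′ denoting the updated positions):

```
(I - \delta M^{-1} L)\, U' = U
\quad\Longleftrightarrow\quad
(M - \delta L)\, U' = M\, U
```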

This solution might converge for very small delta values, but eventually the surface positions U will converge on a single point and our floating point representation will run out of precision. In our context, the problem will be that entries in our diagonal mass matrix will degenerate because triangles in our mesh are degenerating.

So for numerical reasons we must normalize the positions at every step. There are various ways we could do this; here I show normalizing the surface to have unit surface area:

```
L = cotmatrix(V,F);
U = V;
while true
  M = massmatrix(U,F,'barycentric');
  U = (M-delta*L)\(M*U);
  U = U/sqrt(sum(doublearea(U,F))*0.5);
  tsurf(F,U);
  drawnow;
end
```

This is better, but it still might only converge for small delta values and even then eventually diverge. The problem this time is a bit more subtle. Our mesh is unit area, but “floating” in space. Importantly, it’s floating away from the origin. So to store the coordinates, we’re *wasting* precision on each vertex by also storing this global translation. We eventually run into the same problem as before. Thus we should *recenter* the mesh around the origin at each step. Again, there are many ways to do this; here I subtract the centroid:

```
L = cotmatrix(V,F);
U = V;
while true
  M = massmatrix(U,F,'barycentric');
  U = (M-delta*L)\(M*U);
  U = U/sqrt(sum(doublearea(U,F))*0.5);
  U = bsxfun(@minus,U,centroid(U,F));
  tsurf(F,U);
  drawnow;
end
```
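The two stabilizing steps can be sketched outside of MATLAB, too. The following NumPy fragment (my own sketch; note it uses the area-weighted surface centroid, whereas gptoolbox’s `centroid` integrates over the enclosed volume) normalizes a toy mesh to unit area and recenters it:

```python
import numpy as np

# Toy mesh: a tetrahedron translated away from the origin.
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) + 5.0
F = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])

def double_area(V, F):
    # Twice the area of each triangle, via edge cross products.
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    return np.linalg.norm(np.cross(e1, e2), axis=1)

# Normalize to unit total surface area.
U = V / np.sqrt(double_area(V, F).sum() * 0.5)

# Recenter on the area-weighted surface centroid (an assumption here).
w = double_area(U, F)
barycenters = U[F].mean(axis=1)
U = U - (w[:, None] * barycenters).sum(axis=0) / w.sum()
```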

Depending on the quality of the original mesh, we should now be able to take quite huge time steps and still see convergence (e.g. `delta = 1e12`!).

**Important:** Normalizing the surface area and recentering at the origin are necessary for numerics, not just visualization.

**Note:** Convergence means that our mesh flows to the sphere. However, the mesh may still *slide around* on the sphere. The problem being that any Möbius transformation is *allowed* by the flow. Eventually vertices may group up at a pole on the sphere and our mass matrix will degenerate.

This is code I end up re-writing every once in a while to demonstrate k-harmonic deformation on a simple 2d domain. The domain is just a square. This code meshes the domain so that two inner circles show up in the edge set. Then I can select the vertices on and outside the outer circle and on and inside the inner circle to create a bump.

```
R = 1;
r = 0.15;
outer_sam = 200;
inner_sam = ceil(outer_sam/R*r);
inner_theta = linspace(0,2*pi,inner_sam+1)';
outer_theta = linspace(0,2*pi,outer_sam+1)';
P_inner = r*[cos(inner_theta(2:end)) sin(inner_theta(2:end))];
P_outer = R*[cos(outer_theta(2:end)) sin(outer_theta(2:end))];
E_inner = [1:size(P_inner,1);2:size(P_inner,1) 1]';
E_outer = [1:size(P_outer,1);2:size(P_outer,1) 1]';
P_bb = 1.1*[-R -R;-R R;R R;R -R];
E_bb = [1 2;2 3;3 4;4 1];
[P_bb,E_bb] = resample_polygon(P_bb,E_bb,2*pi*R/outer_sam);
P = [P_inner;P_outer;P_bb];
E = [E_inner;size(P_inner,1)+[E_outer;size(P_outer,1)+E_bb]];
[V,F] = triangle(P,E,[],'Flags',sprintf('-Yq32a%0.17f',(2*pi*R/outer_sam)^2),'Quiet');
tsurf(F,V)
```

Dear spammers, you may enjoy posting on auto blog instead