Sketch Cleanup
The goal of the sketching interface is to mimic an artist's natural sketching gesture. When creating a curve on paper, an artist often draws multiple strokes with varying strengths. The software must clean up these strokes and produce a smooth curve.
There is a large body of prior work in 2D sketch recognition and cleanup. Most methods are designed to process a complete sketch where all the strokes have already been drawn. Therefore, they employ heavy machinery to produce a clean curve network, at the cost of performance. In contrast, we only need to recognize one continuous curve at a time, and thus can design a more performant pipeline.
The overall stroke cleanup process works as follows:
- For each stroke, we resample the recorded points, keeping a smaller subset that still captures the stroke's geometry.
- We reorder the resampled point set so that the points form a continuous curve that best captures the underlying geometry. This is effectively a dimension reduction problem, where we convert a 2D point set into a 1D line.
- We fit a sequence of cubic Bezier curves to the ordered points.
Resample Stroke Points
When the user creates a stroke, the device records a sequence of 2D points. It is unnecessary to store all the 2D points in memory, and thus we resample the points to select a subset that keeps the general geometry of the stroke. In particular, we use the Ramer-Douglas-Peucker (RDP) algorithm.
Intuitively, the RDP algorithm approximates the points with a line from the first point to the last point, and keeps only the point that is farthest away from that line. Of course, this is a poor approximation of anything that is not a line, so we recursively divide the points at the farthest point, and stop once each subset is approximately a line. More formally:
- We add the first point $p_0$ and the last point $p_n$ to the result set.
- We divide the points into two subsets $P_1 = \{p_0, \dots, p_k\}$ and $P_2 = \{p_k, \dots, p_n\}$ at index $k$, where $k$ is the index of the point that is farthest away from the line from $p_0$ to $p_n$.
- If the distance from $p_k$ to the line $\overline{p_0 p_n}$ is greater than a parameter $\epsilon$, then we add $p_k$ to the result set and run the algorithm recursively on $P_1$ and $P_2$. Otherwise, we stop the recursion, since the point set is approximately a line.
This algorithm effectively retains the shape of a point set while removing redundant points. We store the resampled points in a tree data structure to perform fast spatial queries.
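The recursion is short enough to sketch directly. The following Python snippet is a minimal illustration, assuming 2D points stored in NumPy arrays; the function name `rdp` and the threshold parameter `epsilon` are our own labels, not identifiers from the actual codebase.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if np.allclose(a, b):
        return float(np.linalg.norm(p - a))
    d = b - a
    v = p - a
    # Parallelogram area divided by the base length.
    return abs(d[0] * v[1] - d[1] * v[0]) / float(np.linalg.norm(d))

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker resampling of an ordered polyline of 2D points."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    # Find the point farthest from the chord between the two endpoints.
    dists = [point_line_distance(p, points[0], points[-1]) for p in points[1:-1]]
    k = int(np.argmax(dists)) + 1
    if dists[k - 1] > epsilon:
        # Split at the farthest point and recurse on both halves.
        left = rdp(points[:k + 1], epsilon)
        right = rdp(points[k:], epsilon)
        return np.vstack([left[:-1], right])   # drop the duplicated split point
    # The chord approximates this segment well enough: keep only the endpoints.
    return np.vstack([points[0], points[-1]])
```

In practice, $\epsilon$ trades memory against fidelity: a larger threshold keeps fewer points but smooths out small wiggles.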
Iterative Local Search
After the user completes multiple strokes, we obtain a set of unordered points. Since we intend to fit a sequence of Bezier curves over the points, we must first order these points. The first ordering strategy that we employ is iterative local search. The idea is to mimic an ant jumping between the points: starting from any point $p$ in the point set, fit a tangent line over the neighborhood of $p$, project the neighboring points onto the line, find the point that has the largest projection, and recursively search from that point. This process is similar to how an ant samples its environment and follows the direction of the strongest smell. More formally:
- Given a point $p_i$, find its neighbor points $N(p_i)$.
- Fit a line over all points in $N(p_i)$, and deduce a tangent vector $t_i$.
- For each $q \in N(p_i)$, find the projection $(q - p_i) \cdot t_i$. Then we recursively run the algorithm on $p_{i+1} = \arg\max_{q \in N(p_i)} (q - p_i) \cdot t_i$ until we visit all vertices.
To discourage the ordered points from making sharp turns, we also add a penalty to points that project poorly onto the previous tangent direction.
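A minimal sketch of this greedy ordering is shown below, assuming a fixed neighborhood radius, a PCA-fitted tangent, and a simple dot-product penalty for backtracking; the parameter values, the SciPy k-d tree, and the helper names are illustrative choices rather than the actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def order_points_greedy(points, radius=0.05, turn_penalty=0.5):
    """Greedy 'ant walk' ordering: from the current point, fit a local tangent
    and jump to the unvisited neighbor with the largest projection onto it,
    penalizing candidates that turn back on the previous direction."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    visited = np.zeros(len(points), dtype=bool)

    current = 0                      # start from an arbitrary point
    order = [current]
    visited[current] = True
    prev_dir = None

    while True:
        neighbors = [i for i in tree.query_ball_point(points[current], radius)
                     if not visited[i]]
        if not neighbors:
            break                    # dead end: all neighbors already visited
        # Fit a tangent over the neighborhood (current point included) via PCA.
        local = points[neighbors + [current]]
        local = local - local.mean(axis=0)
        tangent = np.linalg.svd(local, full_matrices=False)[2][0]
        # Keep the tangent roughly aligned with the previous travel direction.
        if prev_dir is not None and np.dot(tangent, prev_dir) < 0:
            tangent = -tangent

        def score(i):
            step = points[i] - points[current]
            s = np.dot(step, tangent)
            if prev_dir is not None:
                # Penalize points that project poorly onto the previous direction.
                s -= turn_penalty * max(0.0, -np.dot(step, prev_dir))
            return s

        nxt = max(neighbors, key=score)
        prev_dir = points[nxt] - points[current]
        prev_dir = prev_dir / np.linalg.norm(prev_dir)
        current = nxt
        order.append(current)
        visited[current] = True
    return order
```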
Diffusion Map
One problem with iterative local search is that, if the algorithm makes a mistake at any step along the process, for example following a path that leads to a dead end, there is no way to backtrack and correct it. To improve the algorithm, we need a global search approach. Therefore, we implement a diffusion map algorithm for strokes that are difficult to order. In essence, the diffusion map uses random walks to uncover the underlying geometry of the point cloud. For each point, imagine we start a random walk there and, at each step, jump to a neighbor with some probability. Then after many steps, we get a probability cloud spread over all points. The diffusion map essentially maps every point to its probability cloud in the eigen-basis of the random walk matrix. More formally,
- Define a weight matrix $W$ where $W_{ij}$ is the weight between point $x_i$ and point $x_j$. In our case, the weight is the Gaussian of the L2 norm, namely $W_{ij} = \exp\left(-\|x_i - x_j\|^2 / \sigma^2\right)$, where $\sigma$ controls the radius of the neighborhood.
- Normalize each row of $W$ to get the Markov matrix $M$, i.e. $M = D^{-1} W$, where $D$ is a diagonal matrix of the row sums of $W$. Note that $M_{ij}$ is the probability of jumping from point $x_i$ to point $x_j$ at every step.
- Note that the Markov matrix $M$ is not guaranteed to be symmetric. Therefore, we find its symmetric conjugate $S = D^{1/2} M D^{-1/2} = D^{-1/2} W D^{-1/2}$, and get the eigenvalues $\lambda_k$ and orthonormal eigenvectors $v_k$ of $S$ such that $S = V \Lambda V^T$. Therefore, $M = D^{-1/2} V \Lambda V^T D^{1/2}$. We define the right eigenvectors $\psi_k = D^{-1/2} v_k$ and left eigenvectors $\phi_k = D^{1/2} v_k$, and each row of the Markov matrix can be written as $M_{i,\cdot} = \sum_k \lambda_k \psi_k(i)\, \phi_k^T$.
- Since the $i$-th row of $M^t$ is the probability distribution of the random walk after $t$ steps, we can write the $i$-th vertex as $\left(\lambda_0^t \psi_0(i), \lambda_1^t \psi_1(i), \dots, \lambda_{n-1}^t \psi_{n-1}(i)\right)$, where we sort the entries by eigenvalue in decreasing order. Note that the first eigenvalue $\lambda_0$ is always 1, which is a trivial state, so we ignore it. In addition, the eigenvalues toward the end are negligible, especially when $t$ becomes large, so we can ignore them as well. Therefore, we map the $i$-th vertex to $\left(\lambda_1^t \psi_1(i), \dots, \lambda_d^t \psi_d(i)\right)$, where $d$ is the target dimension. In our case, we set $d = 2$.
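A compact NumPy sketch of this construction is given below; the kernel width `sigma`, the number of steps `t`, and the function name are illustrative assumptions, with the target dimension fixed to $d = 2$ as in the text.

```python
import numpy as np

def diffusion_map(points, sigma=0.05, t=8, d=2):
    """Embed 2D points into d diffusion coordinates."""
    X = np.asarray(points, dtype=float)
    # Pairwise Gaussian weights W_ij = exp(-||x_i - x_j||^2 / sigma^2).
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / sigma**2)
    # Row sums give the diagonal of D; M = D^{-1} W is the Markov matrix.
    deg = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    # Symmetric conjugate S = D^{-1/2} W D^{-1/2} shares M's eigenvalues.
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    lam, V = np.linalg.eigh(S)          # ascending eigenvalues
    lam, V = lam[::-1], V[:, ::-1]      # sort in decreasing order
    # Right eigenvectors of M: psi_k = D^{-1/2} v_k.
    psi = V * d_inv_sqrt[:, None]
    # Skip the trivial first eigenvalue (always 1), keep the next d coordinates.
    return (lam[1:d + 1] ** t) * psi[:, 1:d + 1]
```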
Finally, after we project the original points into 2D, we can easily check whether the point cloud is closed. If the point cloud is closed, the projection onto the eigen-basis should be shaped like a circle. Otherwise, the projection should be shaped like a line.
The final problem is dealing with self-intersections in the strokes. An effective solution is to modify the distance function to include differences in tangent direction at each point. Basically, for each point, we find its local tangent vector via Principal Component Analysis (PCA), and then compare the tangent vectors in addition to the positions.
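As a hedged illustration, such an augmented distance could look like the following, where the tangent weight `alpha` is a hypothetical parameter and the tangents are assumed to be unit vectors estimated by PCA over each point's neighborhood.

```python
import numpy as np

def pca_tangent(neighborhood):
    """Unit tangent of a local neighborhood (n-by-2 array) via PCA."""
    centered = neighborhood - neighborhood.mean(axis=0)
    return np.linalg.svd(centered, full_matrices=False)[2][0]

def augmented_distance(p, q, t_p, t_q, alpha=0.1):
    """Distance mixing positions with local tangents, so that strokes that
    cross each other in different directions remain far apart."""
    pos = float(np.linalg.norm(p - q))
    # Tangents have no preferred sign, so compare them up to a flip.
    tan = 1.0 - abs(float(np.dot(t_p, t_q)))
    return pos + alpha * tan
```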