September 3, 2009
1 Introduction
Since their introduction, Moving Least Squares functions have been used for a multitude of topics, ranging from computer graphics to scientific simulation areas where PDE solving is demanded [1, 2, 3, 4].
MLS is a kind of interpolation function, although many (including the author) prefer to name it an approximant, for reasons that will be explained shortly. Authors with a background in finite elements also call MLS a shape function. An interpolation function can be seen as an artifact that, given a finite set of points of a certain Euclidean space R^n and values at those points, K = {(κ_1, v_1), (κ_2, v_2), . . . , (κ_n, v_n)}, can produce a smooth and continuous function φ(x), with x ∈ R^n, which at each of the κ_i points takes the value v_i. For clarity, in the rest of this writing I will designate the κ_i points as key points (see figure 2).
The interpolation artifact is most useful when it has a linear structure:

    φ(x) = Σ_{k=1}^{n} θ_k ψ_k(x)    (1)
[Figure 1: three surface plots over the unit square; the top panel is titled "key point specification and values".]
Figure 1: The concept behind approximation: get a smooth surface as close as possible to the given values. Top: Points κ_k and their corresponding values v_k. Middle: An MLS approximation surface built using a linear polynomial basis. Bottom: An MLS approximation surface built using a quadratic polynomial basis. Note how the quadratic surface is a better fit.
[Figure 2: four plots over [−0.5, 1.5]²; the upper-left panel ("Approximant key points and their influence radius") shows key points ϰ0 . . . ϰ6 with their influence radii, and the upper-right panel ("Points violating the pu law") marks good points in green.]
Figure 2: Upper-left: Key points and their influence domain. Upper-right, lower-left and lower-right: Zones where the MLS (per subfigure, with a polynomial basis of two (2), three (3) and four (4) terms; observe that a quadratic basis in two dimensions contains six (6) terms) is defined (green), where it is not (red), and where even the LU decomposition explicitly fails (black).
The consistency degree of an interpolant is the order of the complete basis of polynomials that it can interpolate exactly. Formally [5], a set of functions {φ_i(x)} is consistent of order n if the following condition is met for every polynomial p of degree up to n:

    Σ_i φ_i(x) p(x_i) = p(x)    (2)
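This condition can be checked numerically. As an illustration (my own sketch, not part of the author's code), one-dimensional hat functions, the simplest shape functions with consistency of order 1, satisfy 2 for any first-order polynomial:

```python
import numpy as np

def hat_functions(x, nodes):
    """Piecewise-linear (hat) shape functions phi_i(x) on a sorted 1-D node set."""
    phi = np.zeros(len(nodes))
    j = np.searchsorted(nodes, x)      # index of the first node >= x
    if j == 0:
        phi[0] = 1.0
    elif j == len(nodes):
        phi[-1] = 1.0
    else:
        h = nodes[j] - nodes[j - 1]
        phi[j - 1] = (nodes[j] - x) / h
        phi[j] = (x - nodes[j - 1]) / h
    return phi

nodes = np.array([0.0, 0.3, 0.7, 1.0])
p = lambda t: 2.0 * t + 1.0            # a first-order polynomial
x = 0.5
phi = hat_functions(x, nodes)
# Consistency of order 1: sum_i phi_i(x) p(x_i) equals p(x)
print(abs(np.dot(phi, p(nodes)) - p(x)) < 1e-12)
```

The same check fails for a quadratic p, since hat functions are only first-order consistent.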
Consistency is a desired property of, among others, meshless methods. The detailed formulation of MLS has been covered elsewhere; anyway, the reader can find a conventional deduction in section 3, with special emphasis on formulas for numeric calculation.
• A weight function with compact support, which will be used to shape the influence domain of the key points. This function should be defined from R^n to R, but in practice it can be constructed by applying some kind of extending functional to a regular uni-dimensional function. See figure 3.
The Moving Least Squares interpolant is expensive to compute. Several matrix multiplications have to be performed, as well as the construction of the matrices themselves. Each matrix multiplication requires r·s·t scalar multiplications, where r, s, t are the dimensions of the multiplied matrices. Several things can be done to slightly improve performance¹. First, the locality of the MLS approximant, given by the compactness property, can be exploited. That amounts to including only those key points whose influence domain covers the evaluation point x0.
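This first optimization can be sketched directly; the names below (key points as coordinate tuples, one influence radius per key point) are my own assumptions, not the author's API:

```python
import numpy as np

def active_key_points(key_points, radii, x0):
    """Return the indices of the key points whose influence domain
    (a ball of the given radius around each key point) covers x0."""
    key_points = np.asarray(key_points, dtype=float)
    dists = np.linalg.norm(key_points - np.asarray(x0, dtype=float), axis=1)
    return np.flatnonzero(dists <= np.asarray(radii))

pts = [(0.0, 0.0), (1.0, 0.0), (0.4, 0.4)]
idx = active_key_points(pts, [0.5, 0.5, 0.5], (0.2, 0.2))
print(idx)   # only key points 0 and 2 reach (0.2, 0.2)
```

Only the returned key points need to enter the matrices built at x0.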
¹ i.e., the timing of each operation
Figure 3: The quartic spline is defined as

    w4(s) = 1 − 6s² + 8s³ − 3s⁴   if |s| ≤ 1
    w4(s) = 0                     if |s| > 1

Top: Simple uni-dimensional plot of the quartic spline. Left: Square extension of the quartic spline by isothetic multiplication. Right: Extension of the quartic spline by rotation around the z axis. The differences between left and right are better observed in the contour plot below each subfigure.
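The definition above translates directly to code; the following sketch (my naming, not the author's) implements w4 and its extension by rotation:

```python
import numpy as np

def w4(s):
    """Quartic spline: 1 - 6 s^2 + 8 s^3 - 3 s^4 for |s| <= 1, else 0."""
    s = abs(s)
    if s > 1.0:
        return 0.0
    return 1.0 - 6.0 * s**2 + 8.0 * s**3 - 3.0 * s**4

def radial_weight(x, center, radius):
    """Extension by rotation: feed the normalized distance to the
    uni-dimensional spline, giving a spherical influence domain."""
    s = np.linalg.norm(np.asarray(x) - np.asarray(center)) / radius
    return w4(s)

print(w4(0.0))   # 1.0 at the key point
print(w4(1.0))   # 0.0 at the support boundary
```

Note that w4 is monotonically decreasing on [0, 1] and vanishes outside the unit interval, the two conditions listed for W in section 3.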
    def main():
        # Retrieve key points from some place
        key_points = getKeyPoints()

        # The point where the interpolator is evaluated
        eval_point = Vector2( 0.5, 0.5 )

        # The polynomial basis, in two (2) dimensions, with the three (3)
        # first terms of the two-dimensional polynomial basis: 1, x, y
        pbasis = PolynomialBasis( 2, 3 )

        # The radius of the influence domain of each key point
        influence_radius = 0.5

        # Construction of a multi-dimensional weight function from a
        # uni-dimensional spline
        centered_spherical_weightfunction = SphericWeightFunction(
            QuarticSpline,
            Vector2(0.0, 0.0),
            influence_radius
        )

        # The three "ingredients" for configuring the MLS interpolant
        new_config = GlobalInfo(
            influence_radius,
            centered_spherical_weightfunction,
            pbasis
        )

        # Feed the key points to the context factory. This enables the
        # calculation of magnitudes which do not depend on the
        # evaluation point.
        cf = MlsContextFactory( new_config )
        ctx = cf.apply( key_points )

        # Create a "point evaluation" object which can reuse intermediate
        # results obtained when evaluating the magnitude for derivatives
        # calculation and vice-versa
        point_evaluation = ctx.atPoint( eval_point )

        # Print the value of the psi_k(x0) components for the evaluation point
        for kp in key_points.underlying():
            print point_evaluation.magnitude( kp )

    main()
Figure 4: Python code snippet showing inputs and usage of an MLS implementation
Second, there are certain magnitudes (the matrices p(κ_i)p^T(κ_i), see section 3) which do not depend on the evaluation point x0, but only on the key points. There are two options for evaluating these magnitudes: either evaluating them beforehand, or on demand. The first choice is all right if the key points are known in advance (most of the time), while the second one is best fitted for when they are not.
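The first option can be sketched as a simple cache of the p(κ_i)p^T(κ_i) outer products; the basis and the function names are illustrative assumptions of mine:

```python
import numpy as np

def pbasis(pt):
    """Linear polynomial basis in two dimensions: (1, x, y)."""
    x, y = pt
    return np.array([1.0, x, y])

def precompute_outer_products(key_points):
    """Matrices p(k_i) p^T(k_i): they depend only on the key points,
    never on the evaluation point, so they can be built once."""
    return [np.outer(pbasis(k), pbasis(k)) for k in key_points]

cache = precompute_outer_products([(0.0, 0.0), (1.0, 2.0)])
print(cache[1])
```

At evaluation time each cached matrix only needs to be scaled by W(x0, κ_i) and accumulated.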
The bulk of the operations of the MLS calculation, including matrix inversion and multiplication, is performed at the evaluation point itself. There is an inverse-matrix multiplication, which can be somewhat optimized using LU decomposition and back substitution.
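As an illustration of this point (with a small made-up system, not the author's matrices), numpy.linalg.solve performs exactly an LU factorization followed by back substitution, and should be preferred to forming the inverse explicitly:

```python
import numpy as np

# A small symmetric "moment matrix" and right-hand side, made up for illustration.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

a_inv = np.linalg.inv(A) @ b   # explicit inverse: more work, less accurate
a_lu = np.linalg.solve(A, b)   # LU factorization + back substitution

print(np.allclose(a_inv, a_lu))   # same result either way
```

The same substitution (solve instead of invert) applies verbatim to the A(x) a(x) = B(x) U_S system of section 3.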
Another logistic detail which should be considered when implementing MLS is the number of key points exerting influence at each point where the value of the interpolant is sought. MLS is constructed using a polynomial basis of order m, and the minimum number of key points necessary increases as m increases; see the last three subfigures of figure 2.
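A cheap necessary check follows from the structure of the moment matrix: A(x) is m × m and is a sum of one rank-one term per covering key point, so fewer than m covering key points cannot yield an invertible A(x). A hypothetical guard (my naming):

```python
def mls_defined_at(n_covering_key_points, m_basis_terms):
    """Necessary condition for the MLS moment matrix to be invertible:
    at least as many covering key points as basis terms. (Necessary,
    not sufficient: degenerate key point placements also fail.)"""
    return n_covering_key_points >= m_basis_terms

# Quadratic basis in two dimensions: 1, x, y, x^2, x*y, y^2 -> six terms
print(mls_defined_at(4, 6))   # red zone of figure 2: not enough points
print(mls_defined_at(7, 6))   # enough points (placement permitting)
```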
Figure 4 contains the probable steps that a client of some MLS implementation has to follow, in the form of a small Python snippet. I should note that this snippet is part of a slightly bigger file which actually works, see section
    φ(x) = Σ_{i=1}^{m} p_i(x) a_i(x) = p^T(x) a(x)    (3)
where m is the number of terms of the monomial basis, and a(x) is the vector of coefficients. For determining the functions in a, a number of partially pinned functions of the form
    φ(x, x_i) = p^T(x_i) a(x)    (4)
is built, where the x_i, i = 1, 2, . . . , n are the nodes in the support domain of x. Then a weighted residual functional of the form
    J = Σ_{i=1}^{n} W(x, x_i) [u^h(x, x_i) − u_i]²    (5)
is minimized². Here, the W weight function should satisfy these conditions:
• Be zero outside its support domain Ω; this is usually called being compact.

² Observe that the components of the summation defining J are a way of saying that u^h(x, x_i) should be nearer to u_i the nearer x is to x_i.
• Be a monotonically decreasing function.
The weight function is not responsible for the order of consistency of the resulting shape function, but it affects compatibility. The term moving in the designation of these shape functions comes from the fact of W being a local clipping function for a nodal subdomain, usually a function depending on |x − x_i|; however, this does not always have to be the case.
The minimum of J can be found by resorting to the variational principle

    δJ/δa_j = 0    (6)

with j = 1, 2, . . . , m. Let's do the variations. First,
    δ/δa_j [ W(x, x_i) (u^h(x, x_i) − u_i)² ] = 2 W(x, x_i) (u^h(x, x_i) − u_i) p_j(x_i)

so that, dropping the common factor 2,

    δJ/δa_j = Σ_{i=1}^{n} W(x, x_i) (u^h(x, x_i) − u_i) p_j(x_i) = 0

Expanding u^h(x, x_i) = Σ_{k=1}^{m} p_k(x_i) a_k(x) per 4:

    Σ_{i=1}^{n} W(x, x_i) p_j(x_i) Σ_{k=1}^{m} p_k(x_i) a_k(x) − Σ_{i=1}^{n} u_i W(x, x_i) p_j(x_i) = 0
As a last step, the summation signs can be commuted and the independent term can be written as a right member:

    Σ_{k=1}^{m} a_k(x) Σ_{i=1}^{n} W(x, x_i) p_j(x_i) p_k(x_i) = Σ_{i=1}^{n} u_i W(x, x_i) p_j(x_i)    (7)
Figure 5: Diagram with dimensions of the involved matrices and vectors.
Now we will obtain a representation for 3 in the form 1. Matrix A(x) can be represented using

    A(x) = Σ_{i=1}^{n} W(x, x_i) p(x_i) p^T(x_i)
Note that in the previous expression p(x_i) p^T(x_i) is a matrix for each i. The right-hand side of 7 can be written as B(x) U_S, where

    U_S = (u_1, u_2, . . . , u_n)^T
and

    B(x) = ⎡ W(x, x_1)p_1(x_1)  W(x, x_2)p_1(x_2)  ...  W(x, x_n)p_1(x_n) ⎤
           ⎢ W(x, x_1)p_2(x_1)  W(x, x_2)p_2(x_2)  ...  W(x, x_n)p_2(x_n) ⎥
           ⎢        ⋮                  ⋮                       ⋮          ⎥
           ⎣ W(x, x_1)p_m(x_1)  W(x, x_2)p_m(x_2)  ...  W(x, x_n)p_m(x_n) ⎦
Putting these expressions in 7 we obtain:

    A(x) a(x) = B(x) U_S
from where

    a(x) = A^{-1}(x) B(x) U_S

and using 3 we finally get

    φ(x) = p^T(x) A^{-1}(x) B(x) U_S

which is the form 1³. If for some i the weight W(x, x_i) is zero, then the corresponding column of B(x) is zero too, so for points x outside the support of W(x, x_i) there is no contribution from node i. So, it can be said that the functions ψ_i(x) have compact support.
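The closed form just obtained can be condensed into a short numpy sketch; this is my own one-dimensional illustration with the linear basis p = (1, x) and the quartic spline weight, not the author's implementation:

```python
import numpy as np

def w4(s):
    # Quartic spline weight, compact support on [-1, 1]
    s = abs(s)
    return 0.0 if s > 1.0 else 1.0 - 6.0*s**2 + 8.0*s**3 - 3.0*s**4

def mls_shape_functions(x, nodes, radius):
    """psi(x) = p^T(x) A^{-1}(x) B(x) in one dimension, basis p = (1, x)."""
    p = lambda t: np.array([1.0, t])
    W = np.array([w4((x - xi) / radius) for xi in nodes])
    # A(x) = sum_i W(x, x_i) p(x_i) p^T(x_i)
    A = sum(Wi * np.outer(p(xi), p(xi)) for Wi, xi in zip(W, nodes))
    # B(x): column i is W(x, x_i) p(x_i)
    B = np.column_stack([Wi * p(xi) for Wi, xi in zip(W, nodes)])
    # gamma solves gamma^T A = p^T (A is symmetric); no explicit inverse needed
    gamma = np.linalg.solve(A, p(x))
    return B.T @ gamma

nodes = np.linspace(0.0, 1.0, 5)
psi = mls_shape_functions(0.37, nodes, radius=0.6)
u = 3.0 * nodes + 1.0                               # nodal values of a linear field
print(abs(psi @ u - (3.0 * 0.37 + 1.0)) < 1e-10)    # reproduces linear fields
print(abs(psi.sum() - 1.0) < 1e-10)                 # partition of unity
```

Note that the node at 1.0 lies outside the support at x = 0.37, so its ψ component is exactly zero, illustrating the compactness remark above.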
3.2 Derivatives
Obtaining the derivatives of the functions ψ_i(x) for the MLS approximant is a bit tricky. Let's start by writing the vector of all the functions, φ(x):

    φ^T(x) = γ^T(x) B(x)

where the auxiliary vector γ(x) is defined by

    γ^T(x) A(x) = p^T(x)

In both previous equations the interesting term, the one carrying the higher order derivatives, is γ. Now it's time to use expressions 11 for getting the derivatives of φ(x):
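Those expressions are not reproduced in this extract; for reference, a hedged reconstruction of the standard first-derivative formulas (cf. [5]), where a subscript ,k denotes ∂/∂x_k:

```latex
% Differentiating \gamma^{T}(x) A(x) = p^{T}(x) gives a linear system for the
% derivatives of the auxiliary vector \gamma:
\gamma^{T}_{,k}(x)\, A(x) = p^{T}_{,k}(x) - \gamma^{T}(x)\, A_{,k}(x)
% and then, from \phi^{T}(x) = \gamma^{T}(x) B(x), the product rule yields
\phi^{T}_{,k}(x) = \gamma^{T}_{,k}(x)\, B(x) + \gamma^{T}(x)\, B_{,k}(x)
```

Each derivative of γ thus reuses the already factored A(x), which is why the "point evaluation" object of figure 4 can share intermediate results between magnitude and derivative calculations.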
³ Just note that the scalar product of the two vectors φ(x) and U_S expands to the sum of the products of their components.
4 Conclusions
This little work has came after various years working with MLS approximants. Just now
I have had time to approach it this way, after recognizing that many runtime problems
in meshless code are due to incorrect use of this approximant.
You can access all the code used to write this document at http://bitbucket.org/dignor_sign/neopar/. The specific snippet of Python code (Jython, indeed) can be found in the same repository under assets/test_scripts/test_mls2_python.py.
References
[1] Marc Duflot and Hung Nguyen-Dang. A truly meshless Galerkin method based on a moving least squares quadrature. 18:441–449, 2002.
[2] Boris Mederos, Luiz Velho, and Luiz Henrique de Figueiredo. Moving least squares multiresolution surface approximation.
[3] David Levin. The approximation power of moving least-squares. Math. Comp., (67), 1998.
[4] Scott Schaefer, Travis McPhail, and Joe Warren. Image deformation using moving least squares. Pages 533–540, 2006.
[5] Thomas-Peter Fries and Hermann Georg Matthies. Classification and overview of meshfree methods. Informatikbericht, 2003(3), 2003.