S. Plunder, S. Merino-Aceituno, "Convergence proof for first-order position-based dynamics: An efficient scheme for inequality constrained ODEs" (2023), arXiv preprint.
Fast and simple simulations with contact forces
Position-based dynamics (PBD) can be used to simulate first-order differential equations with inequality constraints. I aim to show mathematically that the method converges.
Before we go into the mathematics behind PBD, let's have a look at what such simulations can look like.
Note: You can add more disks by clicking inside the rectangle.
The simulation above uses a very simple numerical method. In fact, Matthias Müller challenges everyone to find a method that beats PBD in terms of accuracy, speed, simplicity, or stability.
In my applications to the simulation of epithelial tissue and cell migration, all these properties were clearly noticeable and provided a huge advantage; in particular, the speed of PBD allowed us to simulate up to 100× faster compared to several other methods!
How does it work?
There are many similar ways to mix inequality constraints and differential equations. In this brief description, I will use the velocity projection approach.
Let $x = (x_1, \dots, x_N) \in \mathbb{R}^{2N}$ denote the positions of all $N$ disks with radius $R > 0$. If we denote the forces by $f(x)$, then the typical model would be the differential equation
$$\dot{x} = f(x).$$
However, if we impose that the disks do not overlap, we need to add extra conditions.
One way to incorporate these constraints is to define a set of feasible configurations, first for each individual non-overlap constraint
$$S_{ij} = \{\, x \in \mathbb{R}^{2N} \mid \|x_i - x_j\| \geq 2R \,\}$$
and then combined,
$$S = \bigcap_{i < j} S_{ij}.$$
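As a small illustration (this helper is not from the paper; it assumes the positions are stored as an $N \times 2$ NumPy array), membership in $S$ can be checked pair by pair:

```python
import numpy as np

def is_feasible(x, R):
    """Return True if no two disks of radius R overlap,
    i.e. if the configuration x lies in S = ∩ S_ij."""
    N = len(x)
    for i in range(N):
        for j in range(i + 1, N):
            if np.linalg.norm(x[i] - x[j]) < 2 * R:
                return False
    return True
```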
With the set of feasible states S, the resulting projected dynamics are
$$\dot{x} = P_{T(S,x)}(f(x)),$$
where $P_{T(S,x)}$ denotes the projection onto the tangent cone $T(S,x)$ of $S$ at $x$.[1]
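For intuition, here is a small Python sketch (an illustration for a single active contact only; with several simultaneous contacts the tangent cone is an intersection of such half-spaces) of how projecting a velocity onto the tangent cone of one constraint $S_{ij}$ removes exactly the part of the velocity that would push the two touching disks into each other:

```python
import numpy as np

def project_tangent_cone_pair(v, x, i, j, R, tol=1e-9):
    """Project the velocity field v onto the tangent cone of the single
    constraint S_ij at x (sketch for one active contact only).

    If disks i and j are in contact and v would push them together,
    the inadmissible normal component is removed; otherwise v is
    returned unchanged."""
    n = x[i] - x[j]
    dist = np.linalg.norm(n)
    if dist > 2 * R + tol:
        return v                      # constraint inactive: nothing to do
    n = n / dist
    # gradient of g(x) = ||x_i - x_j|| - 2R w.r.t. (x_i, x_j) is (n, -n)
    normal_speed = n @ v[i] - n @ v[j]
    if normal_speed >= 0:
        return v                      # v already points into the feasible set
    v = v.copy()
    v[i] -= 0.5 * normal_speed * n    # remove the normal component,
    v[j] += 0.5 * normal_speed * n    # split symmetrically between the disks
    return v
```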
The PBD method for this first-order problem is very simple. For a numerical time-step $h > 0$ and initial condition $x^0 \in S$, the PBD method reads as follows.
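One step consists of an explicit Euler step followed by one fast projection per pairwise constraint,
$$x^{k+1} = P_{S_{N-1,N}} \circ \cdots \circ P_{S_{13}} \circ P_{S_{12}}\big(x^k + h\, f(x^k)\big),$$
where $P_{S_{ij}}$ denotes the projection onto the individual constraint set $S_{ij}$, applied sequentially for each pair (this writes out the intermediate steps that are visualised below).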
The remarkable feature of PBD is that it only performs a few very fast projections, one for each (active) constraint. This is far less work than comparable methods require!
Because of this minimalistic approach, the constraints will most likely not always be satisfied, i.e. sometimes $x^k \notin S$. However, these small infeasibilities shrink with a smaller time-step. The time saved thanks to the fast projections can be well invested by taking smaller time-steps! And taking smaller time-steps has many advantages and improves both the stability and the physical accuracy of the method! For this reason, the authors called the corresponding publication "Small Steps in Physics Simulation".
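To make this concrete, here is a minimal Python sketch of one such step for the disk example (the helper names and the simple pairwise push-apart projection are illustrative choices, not the reference implementation):

```python
import numpy as np

def project_pair(x, i, j, R):
    """Project onto S_ij: if disks i and j overlap, push them apart
    symmetrically along the line connecting their centres."""
    d = x[j] - x[i]
    dist = np.linalg.norm(d)
    if dist >= 2 * R or dist == 0.0:
        return x                           # already feasible (or degenerate)
    correction = (2 * R - dist) / 2 * d / dist
    x = x.copy()
    x[i] -= correction
    x[j] += correction
    return x

def pbd_step(x, f, h, R):
    """One first-order PBD step: explicit Euler step, followed by one
    fast projection per pairwise constraint."""
    x = x + h * f(x)                       # Euler step: x^k + h f(x^k)
    N = len(x)
    for i in range(N):
        for j in range(i + 1, N):
            x = project_pair(x, i, j, R)   # fix constraint (i, j)
    return x
```

Repeated application of `pbd_step` with a small time-step reproduces the behaviour described above; any overlap that remains after a step shrinks as $h$ decreases.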
Visualisation of the intermediate projections
The following video visualises how PBD works.
It shows not only the time-steps $x^0, x^1, \dots$ but also the intermediate
steps of PBD, i.e. $x^k + h f(x^k)$ (Euler step), $P_{S_{12}}(x^k + h f(x^k))$ (fix constraint $(1,2)$), $P_{S_{13}}(P_{S_{12}}(x^k + h f(x^k)))$ (fix constraint $(1,3)$), ...
To make the effect visible, the time-step size $h$ is very large in the second video. Usually the overlap is visually negligible.
Our new convergence proof
To my knowledge, it was not clear whether PBD converges in a mathematically rigorous sense! Many numerical
experiments show that the method is very capable. However, it is often also used in contexts where the
method only gives visually pleasing but inaccurate results.
In the case of overdamped dynamics, we were able to show that for sufficiently small time-steps $h$, the numerical PBD approximation $x_h(t)$ converges towards the exact solution $x(t)$, i.e.
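$$x_h(t) \longrightarrow x(t) \quad \text{for } h \to 0$$
in an appropriate sense; the precise convergence statement can be found in the preprint.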
Dealing with the theory of non-smooth dynamics is technically involved, as it requires
generalisations from non-convex analysis such as proximal normal cones and subdifferential calculus.
The two main ingredients of the convergence proof are to show that
the feasible sets $S_{ij}$ are uniformly prox-regular and that the projection onto the intersection
$S = \bigcap_{i < j} S_{ij}$ is locally a contraction.
Showing the contraction property is the main challenge, but if one assumes that the
intersection is metrically calm, that is, that there exists a constant $\gamma > 0$ such that locally
$$\mathrm{dist}(x, S) \leq \gamma \sum_{1 \leq i < j \leq N} \mathrm{dist}(x, S_{ij}),$$
then one can show the contraction property, which is the key step towards numerical stability.
For numerical consistency, we introduce a new notion of so-called scalarly upper semicontinuous consistency.
The idea here is that the tangent cone $T(S,x)$ is not defined for $x \notin S$, while numerical approximations might
indeed lie outside of the set $S$. To compensate for this, we need the right generalisation of numerical consistency.
However, the notion of convergence for $h \to 0$ cannot be too strong, since otherwise it would imply that the
solution is differentiable. But this cannot be the case, since contact forces always imply that the solution will have discontinuous velocities
and points where the trajectory is not differentiable. The notion of scalarly upper semicontinuous consistency turns out to be just
strong enough to show convergence, but weak enough to be applicable.
Together, these two ideas allow us to show that the numerical approximation $x_h(t)$ converges towards the exact solution $x(t)$.
For more details and proofs, we refer to our preprint.
As the set $S$ happens to be uniformly prox-regular, the different notions
of tangent cones coincide. See for example Wikipedia: Clarke tangent cone for a definition. ↩︎