Short project description:

First-order position-based dynamics

This project is a collaboration with Sara Merino-Aceituno.

Preprint:

  • S. Plunder, S. Merino-Aceituno, Convergence proof for first-order position-based dynamics: An efficient scheme for inequality constrained ODEs. (2023) arXiv preprint.

Fast and simple simulations with contact forces

Position-based dynamics (PBD) can be used to simulate first-order differential equations with inequality constraints. I aim to show mathematically that the method converges.

Before we go into the mathematics behind PBD, let's have a look at what such simulations can look like.

Note: You can add more disks by clicking inside the rectangle.

I used this numerical method for both of my modelling projects, for example in the cell migration model for plithotaxis.

The magic of position-based dynamics (PBD)

The simulation above uses a very simple numerical method. In fact, Matthias Müller challenges everyone to find a method that beats PBD in terms of

  • accuracy,
  • speed,
  • simplicity, or
  • stability.

For my applications in the simulation of epithelial tissue and cell migration, all these properties were clearly visible and provided a huge advantage; in particular, the speed of PBD allowed us to simulate up to $100\times$ faster compared to several other methods!

How does it work?

There are many similar ways to mix inequality constraints and differential equations. In this brief description, I will use the velocity projection approach.

Let $x = (x_1,\dots,x_N) \in \Reals^{2N}$ denote the positions of all $N$ disks with radius $R > 0$. If we denote the forces by $f(x)$, then the typical model would be the differential equation

$$ \dot x = f(x). $$
However, if we impose that the disks do not overlap, we need to add extra conditions. One way to integrate those is to define a set of feasible configurations, first for each individual non-overlap condition
$$ S_{ij} = \{ x \in \Reals^{2N} \mid \Vert x_i - x_j \Vert \geq 2R \} $$
and then combined,
$$ S = \bigcap_{i < j} S_{ij}. $$
With the set of feasible states $S$, the resulting projected dynamics are
$$ \dot x = P_{T(S,x)}( f(x) ) $$
where $P_{T(S,x)}$ denotes the projection onto the tangent cone $T(S,x)$ of $S$ at $x$. [1]
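As an illustration, the projection onto a single constraint set $S_{ij}$ has a simple closed form for equal radii: overlapping disks are pushed apart symmetrically along their line of centres. A minimal sketch in Python (the function name `project_pair` is my own, not from the preprint):

```python
import numpy as np

def project_pair(x, i, j, R):
    """Project positions onto S_ij = {x : ||x_i - x_j|| >= 2R}.
    For equal radii, the closest point moves both disks symmetrically
    along the line of centres until they just touch."""
    d = x[i] - x[j]
    dist = np.linalg.norm(d)
    if dist >= 2 * R or dist == 0.0:
        return x                    # already feasible (or centres coincide)
    n = d / dist                    # unit vector from disk j to disk i
    overlap = 2 * R - dist
    x = x.copy()
    x[i] += 0.5 * overlap * n       # push each disk half the overlap apart
    x[j] -= 0.5 * overlap * n
    return x

# two overlapping unit disks at distance 1 end up exactly at distance 2
x = np.array([[0.0, 0.0], [1.0, 0.0]])
y = project_pair(x, 0, 1, 1.0)
```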

The PBD method for this first-order problem is very simple. For a numerical time-step $h > 0$ and initial condition $x_0 \in S$, the PBD method reads

$$ x_{k+1} = P_{S_{12}} \circ P_{S_{13}} \circ \cdots \circ P_{S_{N-1\,N}} ( x_k + h f(x_k) ) \quad \text{for } k \geq 0. $$

The remarkable feature of PBD is that it performs only a few very fast projections, one for each (active) constraint, which is far less work than comparable methods require. Because of this minimalistic approach, the constraints are not always satisfied exactly, i.e. sometimes $x_k \notin S$. However, these small infeasibilities shrink with a smaller time-step, so the time saved by the fast projections can be reinvested in taking smaller time-steps, which improves both the stability and the physical accuracy of the method. For this reason, the authors called the corresponding publication "Small Steps in Physics Simulation".
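Putting the explicit Euler step and the sequential projections together, one PBD step can be sketched in a few lines of Python (a simplified illustration of the scheme; the sweep order and the symmetric overlap split are my assumptions, and no neighbour search is used):

```python
import numpy as np
from itertools import combinations

def pbd_step(x, f, h, R):
    """One first-order PBD step: an explicit Euler step, followed by one
    sweep of pairwise projections, one per overlapping (active) pair."""
    x = x + h * f(x)                      # unconstrained Euler step
    for i, j in combinations(range(len(x)), 2):
        d = x[i] - x[j]
        dist = np.linalg.norm(d)
        if 0.0 < dist < 2 * R:            # disks i and j overlap
            corr = 0.5 * (2 * R - dist) * d / dist
            x[i] += corr                  # resolve the overlap symmetrically
            x[j] -= corr
    return x

# two unit disks attracted to the origin collide and stop at distance 2
x = np.array([[1.5, 0.0], [-1.5, 0.0]])
y = pbd_step(x, lambda z: -z, 0.5, 1.0)
```

In practice the pairwise sweep is typically combined with a neighbour search (e.g. cell lists), so that only nearby pairs are checked; that is what makes the method fast for many disks.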

Visualisation of the intermediate projections

The following video visualises how PBD works. It shows not only the time-steps $x_0, x_1, \dots$ but also the intermediate steps of PBD, i.e. $x_k + h f(x_k)$ (Euler step), $P_{S_{12}}(x_k + h f(x_k))$ (fix constraint $(1,2)$), $P_{S_{13}}(P_{S_{12}}(x_k + h f(x_k)))$ (fix constraint $(1,3)$), and so on.

To make the effect visible, the time-step size $h$ is very large in the second video. Usually the overlap is visually negligible.

Our new convergence proof

To the best of my knowledge, it was not clear whether PBD converges in a mathematically rigorous sense. Many numerical experiments show that the method is very capable. However, it is often also used in contexts where it only gives visually pleasing but inaccurate results.

In the case of overdamped dynamics, we could show that for sufficiently small time-steps $h$ the numerical approximation $x_h(t)$ of PBD converges to the exact solution $x(t)$, i.e.

$$ \sup_{t \in [0,T]} \Vert x_h(t) - x(t) \Vert \to 0 \quad \text{for } h \to 0. $$

We refer to our preprint for details.
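While the proof itself is analytical, the convergence statement can be probed numerically with a simple self-convergence experiment: run the scheme with decreasing step sizes and compare the final state against a fine-step reference run. A sketch (the helper `pbd_solve` and the test setup are my own, not from the preprint):

```python
import numpy as np
from itertools import combinations

def pbd_solve(x0, f, h, T, R):
    """Run first-order PBD up to time T and return the final positions:
    each step is an Euler step followed by one sweep of pairwise projections."""
    x = np.array(x0, dtype=float)
    for _ in range(int(round(T / h))):
        x = x + h * f(x)
        for i, j in combinations(range(len(x)), 2):
            d = x[i] - x[j]
            dist = np.linalg.norm(d)
            if 0.0 < dist < 2 * R:
                corr = 0.5 * (2 * R - dist) * d / dist
                x[i] += corr
                x[j] -= corr
    return x

# Two disks on a line, both attracted to the origin; they make contact
# and then travel together.  Compare coarse runs against a fine reference.
f = lambda x: -x
x0 = [[2.0, 0.0], [-1.0, 0.0]]
ref = pbd_solve(x0, f, h=0.005, T=2.0, R=0.5)
errors = [np.linalg.norm(pbd_solve(x0, f, h, 2.0, 0.5) - ref)
          for h in (0.2, 0.1, 0.05)]
# the error should shrink as h decreases, consistent with the convergence result
```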

Key ideas of the convergence proof

Dealing with the theory of non-smooth dynamics is technically involved, as it requires generalisations from non-convex analysis such as proximal normal cones and subdifferential calculus.

The two main ingredients of the convergence proof are showing that the feasible sets $S_{ij}$ are uniformly prox-regular and that the projection onto the intersection $S = \bigcap_{i<j} S_{ij}$ is locally a contraction.

Showing the contraction property is the main challenge. If one assumes that the intersection is metrically calm, that is, that there exists a constant $\gamma > 0$ such that locally

$$ \mathrm{dist}(x, S) \leq \gamma \sum_{1 \leq i < j \leq N} \mathrm{dist}(x, S_{ij}), $$
then one can show the contraction property, which is the key step towards numerical stability.

For numerical consistency, we introduce a new notion of so-called scalarly upper semicontinuous consistency. The idea is that the tangent cone $T(S,x)$ is not defined for $x \notin S$, while numerical approximations may indeed lie outside the set $S$. To compensate for this, we need the right generalisation of numerical consistency. However, the notion of consistency for $h \to 0$ cannot be too strong: otherwise it would imply that the solution is differentiable, which cannot hold, since contact forces lead to discontinuous velocities and hence to points where the trajectory is not differentiable. The notion of scalar upper semicontinuity turns out to be just strong enough to show convergence, yet weak enough to be applicable.

Together, these two ideas allow us to show that the numerical approximation $x_h(t)$ converges to the exact solution $x(t)$.

For more details and proofs, we refer to our preprint.


  1. As the set $S$ happens to be uniformly prox-regular, the different notions of tangent cones coincide. See for example Wikipedia: Clarke tangent cone for a definition. ↩︎