Notes on random systems 2

What is a random fixed point?

A reference: Ochs – Oseledets: Examples of RDS on a closed unit ball without random fixed points. (So the topological fixed point theorem is NOT valid for RDS.)

Definition. A random fixed point of an RDS \Phi over a noise space (\Omega, \theta) (where \theta is the dynamics on the noise space) on a state space X is a measurable map x: \Omega -> X such that
\Phi(t, \omega) x(\omega) = x(\theta_t \omega) almost surely,
for every t \in T.
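As a concrete sanity check (my own hypothetical example, not taken from the references above): for a one-dimensional affine RDS x -> a(\omega) x + b(\omega) with |a| < 1, the pullback limit over the past gives a random fixed point, and the identity \Phi(1, \omega) x(\omega) = x(\theta_1 \omega) can be verified numerically.

```python
import random

random.seed(0)

# Hypothetical illustration: two-sided i.i.d. noise omega_k = (a_k, b_k)
# with |a_k| <= 1/2 drives the affine cocycle Phi(1, omega): x -> a_0 x + b_0.
N = 60  # pullback depth; the contraction makes the tail negligible
noise = {k: (random.uniform(-0.5, 0.5), random.uniform(-1.0, 1.0))
         for k in range(-N, 1)}

def pullback_fixed_point(shift):
    """Approximate x(theta^shift omega): start far in the past and push
    forward up to time `shift`; the contraction forgets the starting value."""
    x = 0.0
    for k in range(shift - N, shift):
        a, b = noise[k]
        x = a * x + b
    return x

x_omega = pullback_fixed_point(0)    # x(omega)
x_shifted = pullback_fixed_point(1)  # x(theta_1 omega)
a0, b0 = noise[0]
# Random fixed point identity: Phi(1, omega) x(omega) = x(theta_1 omega),
# up to the pullback truncation error of order (1/2)^N
print(abs(a0 * x_omega + b0 - x_shifted))  # negligibly small
```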

By push-forward, a random fixed point of an RDS in the sense of Arnold yields a distribution on X (the law of x) which is invariant under the system on X.
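Continuing the hypothetical affine example (my own choice of coefficients, not from the notes), the invariance of this induced distribution can be checked empirically: sample the law of the random fixed point and verify that one more random step leaves its moments unchanged.

```python
import random

random.seed(1)

# For the affine RDS x -> a*x + b with i.i.d. a ~ U(-1/2, 1/2) and
# b ~ U(-1, 1), the law of the random fixed point is the unique stationary
# distribution on X = R. Long forward iteration has the same law as the
# pullback limit, so we sample it that way and check that one more random
# step leaves the first two moments fixed.
def sample_stationary(n_steps=60):
    x = 0.0
    for _ in range(n_steps):
        x = random.uniform(-0.5, 0.5) * x + random.uniform(-1.0, 1.0)
    return x

n = 50_000
xs = [sample_stationary() for _ in range(n)]                     # ~ law of x(omega)
ys = [random.uniform(-0.5, 0.5) * x + random.uniform(-1.0, 1.0)  # one more step
      for x in xs]

def mean_var(vs):
    m = sum(vs) / len(vs)
    return m, sum((v - m) ** 2 for v in vs) / len(vs)

m0, v0 = mean_var(xs)
m1, v1 = mean_var(ys)
# Exact stationary moments: mean 0, variance Var(b) / (1 - E[a^2]) = 4/11
print(m0, v0, m1, v1)  # all close to (0, 4/11, 0, 4/11)
```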

Does the topology of X really matter in our case (RDS defined as transformations on the space of measures on X)? Not much, because P(X) is the same up to isomorphism for a lot of different X? But maybe a version of Gelfand-Naimark holds for some restricted classes of distributions?

Anyway, it seems that any RDS in our sense admits a fixed point (an equilibrium mixed state)? The proof is probably similar to the one in ergodic theory that any system admits an invariant measure (the set of invariant measures is convex and non-empty). BUT, apparently, in the example of Ochs-Oseledets there doesn't exist an invariant measure on X!
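For comparison, the deterministic version of that existence argument (Krylov-Bogolyubov) can be made quantitative: Cesàro averages of push-forwards of a point mass are almost invariant, with defect O(1/n), so any weak-* limit point is invariant. A numerical sketch with a test map and observable of my own choosing:

```python
import math

# Krylov-Bogolyubov in miniature: for a map T on a compact space, the
# Cesaro averages mu_n = (1/n) sum_{k<n} delta_{T^k x} are almost invariant.
def T(x):  # logistic map on [0, 1], just a concrete example
    return 4.0 * x * (1.0 - x)

def g(x):  # a bounded continuous test observable
    return math.cos(3.0 * x)

x0, n = 0.3, 10_000
orbit = [x0]
for _ in range(n):
    orbit.append(T(orbit[-1]))

# integral of g against mu_n, and against the push-forward T_* mu_n
int_mu = sum(g(x) for x in orbit[:n]) / n
int_push = sum(g(T(x)) for x in orbit[:n]) / n
# they differ by (g(T^n x0) - g(x0)) / n, hence by at most 2 * sup|g| / n
print(abs(int_push - int_mu))  # <= 2 / n
```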

Assume that there is an "isolated" random fixed point. What does it look like? What is the normal form near it?

Stability of random fixed points?

p – a random fixed point (i.e. an invariant measure on X w.r.t. the RDS)

Is it true that for any mu near p, phi_n(mu) tends to p (in the weak topology?) as n tends to infinity?

Have to change the definition, because, for example, look at a deterministic system with two different stable fixed points p1 and p2. Take a measure of the type (1 - \epsilon) \delta_{p1} + \epsilon \delta_{p2}: it is close to \delta_{p1} in the weak topology, but phi_n doesn't make it any closer to \delta_{p1}. Maybe it will be better with Lyapunov stability? (For any given neighborhood U of p there is another neighborhood V such that if mu is in V then phi_n(mu) is in U for every n.) Looks reasonable.
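The counterexample can be sketched numerically (the particular map f, the fixed points, and the distance below are my own hypothetical choices):

```python
# A deterministic map with two stable fixed points p1 = -2 and p2 = +2
# (piecewise affine contraction toward the nearest fixed point).
def f(x):
    return x / 2 - 1 if x < 0 else x / 2 + 1

# An atomic measure mu = (1 - eps) * delta_{p1} + eps * delta_{p2},
# stored as a list of (weight, atom) pairs.
eps = 0.01
mu = [(1 - eps, -2.0), (eps, 2.0)]

def push_forward(measure):
    return [(w, f(x)) for w, x in measure]

def dist_to_delta_p1(measure, p1=-2.0):
    # Bounded-Wasserstein-type distance to delta_{p1}:
    # the integral of min(1, |x - p1|) against the measure.
    return sum(w * min(1.0, abs(x - p1)) for w, x in measure)

# mu is weakly close to delta_{p1} (distance eps), but pushing forward
# never brings it any closer: both atoms are fixed points of f.
d0 = dist_to_delta_p1(mu)
for _ in range(50):
    mu = push_forward(mu)
print(d0, dist_to_delta_p1(mu))  # both equal eps
```

This also shows why Lyapunov stability is the better notion here: the orbit of mu stays in any small neighborhood of \delta_{p1}, it just never converges to it.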