Monday, November 4, 2024

Stationary Dilations

1 Stationary Dilations

Definition 1. Let \((\Omega, \mathcal{F}, P)\) and \((\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{P})\) be probability spaces. We say that \((\Omega, \mathcal{F}, P)\) is a factor of \((\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{P})\) if there exists a measurable surjective map \(\phi : \tilde{\Omega} \to \Omega\) such that:

  1. For all \(A \in \mathcal{F}\), \(\phi^{- 1} (A) \in \tilde{\mathcal{F}}\)

  2. For all \(A \in \mathcal{F}\), \(P (A) = \tilde{P} (\phi^{- 1} (A))\)

In other words, \((\Omega, \mathcal{F}, P)\) can be obtained from \((\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{P})\) by projecting the larger space onto the smaller one while preserving the probability measure structure.

Remark 2. In the context of stationary dilations, this means that the original nonstationary process \(\{X_t \}\) can be recovered from the stationary dilation \(\{Y_t \}\) through a measurable projection that preserves the probabilistic structure of the original process.

Definition 3. (Stationary Dilation) Let \((\Omega, \mathcal{F}, P)\) be a probability space and let \(\{X_t \}_{t \in \mathbb{R}_+}\) be a nonstationary stochastic process. A stationary dilation of \(\{X_t \}\) is a stationary process \(\{Y_t \}_{t \in \mathbb{R}_+}\) defined on a larger probability space \((\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{P})\) such that:

  1. \((\Omega, \mathcal{F}, P)\) is a factor of \((\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{P})\)

  2. There exists a measurable projection operator \(\Pi\) such that:

    \(\displaystyle X_t = \Pi Y_t \quad \forall t \in \mathbb{R}_+\)

Theorem 4. (Representation of Nonstationary Processes) A continuous-time nonstationary process \(\{X_t \}_{t \in \mathbb{R}_+}\) admits a stationary dilation whose sample paths \(t \mapsto X_t (\omega)\) are continuous with probability one when \(X_t\):

  • is uniformly continuous in probability over compact intervals:

    \(\displaystyle \lim_{s \to t} P (|X_s - X_t | > \epsilon) = 0 \quad \forall \epsilon > 0, t \in [0, T], T > 0\)
  • has finite second moments:

    \(\displaystyle \mathbb{E} [|X_t |^2] < \infty \quad \forall t \in \mathbb{R}_+\)
  • has an integral representation of the form:

    \(\displaystyle X_t = \int_0^t \eta (s) ds\)

    where \(\eta (t)\) is a measurable random function that is stationary in the wide sense (with \(\int_0^t \mathbb{E} [| \eta (s) |^2] \, ds < \infty\) for all \(t\))

  • and has a covariance operator

    \(\displaystyle R (t, s) =\mathbb{E} [X_t X_s]\)

    which is symmetric \((R (t, s) = R (s, t))\), positive definite and continuous

Under these conditions, there exists a representation:

\(\displaystyle X_t = M (t) \cdot S_t\)

where:

  • \(M (t)\) is a continuous deterministic modulation function

  • \(\{S_t \}_{t \in \mathbb{R}_+}\) is a stationary process

This representation can be obtained through the stationary dilation by choosing:

\(\displaystyle Y_t = \left( \begin{array}{c} M (t)\\ S_t \end{array} \right)\)

with the projection operator \(\Pi\) defined as:

\(\displaystyle \Pi Y_t = M (t) \cdot S_t\)
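The dilation-and-project construction above can be sketched numerically. The modulation \(M(t)\) and the stationary component \(S_t\) below are illustrative stand-ins (a sinusoidal modulation and stationary Ornstein-Uhlenbeck paths), chosen only to show that \(X_t = \Pi Y_t = M(t) \cdot S_t\) has the modulated second moment \(\mathbb{E}[X_t^2] = M(t)^2 \, \mathrm{Var}(S)\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch (stand-in choices, not from the text): dilate a
# modulated process X_t = M(t) * S_t by stacking the deterministic
# modulation M(t) with a stationary component S_t, then projecting back.

n_paths, n_steps, dt = 10_000, 200, 0.05
t = np.arange(n_steps) * dt

# Stationary component: Ornstein-Uhlenbeck paths started in the
# invariant distribution N(0, 1/2), exact one-step update.
S = np.empty((n_paths, n_steps))
S[:, 0] = rng.normal(0.0, np.sqrt(0.5), n_paths)
for k in range(1, n_steps):
    S[:, k] = S[:, k-1]*np.exp(-dt) + rng.normal(
        0.0, np.sqrt(0.5*(1 - np.exp(-2*dt))), n_paths)

M = 1.0 + 0.5*np.sin(t)                        # continuous deterministic modulation
Y = np.stack([np.broadcast_to(M, S.shape), S]) # dilated pair (M(t), S_t)
X = Y[0] * Y[1]                                # projection  Pi Y_t = M(t) * S_t

# The second moment of X inherits the modulation: E[X_t^2] = M(t)^2 * Var(S)
emp = (X**2).mean(axis=0)
assert np.allclose(emp, 0.5*M**2, atol=0.05)
```

The Monte Carlo second moment tracks \(M(t)^2 / 2\) across the grid, matching the factorization into a deterministic time-varying component and a stationary stochastic one.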

Proposition 5. (Properties of Dilation) The stationary dilation satisfies:

  1. Preservation of moments:

    \(\displaystyle \mathbb{E} [|X_t |^p] \leq \mathbb{E} [|Y_t |^p] \quad \forall p \geq 1\)
  2. Minimal extension: Among all stationary processes that dilate \(X_t\), there exists a minimal one (unique up to isomorphism) in terms of the probability space dimension

Corollary 6. For any nonstationary process satisfying the above conditions, the stationary dilation provides a canonical factorization into deterministic time-varying components and stationary stochastic components.

Monday, October 28, 2024

Treehouse of Horror: The LaTeX Massacre


**Segment 1: The Formatting**
Homer works as a LaTeX typesetter at the nuclear plant. After Mr. Burns demands perfectly aligned equations, Homer goes insane trying to format complex mathematical expressions, eventually snapping when his equations run off the page[1]. In a parody of "The Shinning," Homer chases his family around with a mechanical keyboard while screaming "All work and no proper alignment makes Homer go crazy!"[1]

**Segment 2: Time and Compilation**
In a nod to "Time and Punishment"[1], Homer accidentally breaks his LaTeX compiler and tries to fix it, but ends up creating a time paradox where every document compiles differently in parallel universes. He desperately tries to find his way back to a reality where his equations render properly.

**Segment 3: The Cursed Code**
Bart discovers an ancient LaTeX document that contains forbidden mathematics. When he compiles it, it summons an eldritch horror made entirely of misaligned integrals and malformed matrices. Lisa must save Springfield by finding the one perfect alignment that will banish the mathematical monster back to its dimension[2].

The episode ends with a meta-joke about how even the credits won't compile properly[4].

Citations:
[1] There's Only 1 Treehouse Of Horror Simpsons Episode You Need ... https://screenrant.com/simpsons-treehouse-horror-v-best-halloween-episode-rewatch/
[2] The Simpsons Halloween Episodes: Every 'Treehouse of Horror ... https://www.ign.com/articles/best-simspsons-halloween-episodes
[3] "The Simpsons" Halloween of Horror (TV Episode 2015) - IMDb https://www.imdb.com/title/tt4480454/
[4] List of The Simpsons Treehouse of Horror episodes - Wikipedia https://en.wikipedia.org/wiki/List_of_The_Simpsons_Treehouse_of_Horror_episodes
[5] The 10 best Treehouse of Horror episodes of The Simpsons https://www.fudgeanimation.com/journal/10-best-treehouse-horror-episodes-simpsons

Friday, October 25, 2024

A Modest Proposal: Statistical Token Prediction Is No Replacement for Syntactic Construction


by Stephen Crowley

October 25, 2024

1 Current Generative-Pretrained-Transformer Architecture

Given vocabulary \(V\), \(|V| = v\), current models map token sequences to vectors:

\(\displaystyle (t_1, \ldots, t_n) \mapsto X \in \mathbb{R}^{n \times d}\)

Through layers of transformations:

\(\displaystyle \text{softmax} (QK^T / \sqrt{d}) V\)

where \(Q = XW_Q\), \(K = XW_K\), \(V = XW_V\)
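The attention map above can be written out in a few lines of numpy. This is a minimal sketch of \(\text{softmax}(QK^T/\sqrt{d})\,V\) with random stand-in weight matrices, not trained parameters:

```python
import numpy as np

# Minimal numpy sketch of softmax(Q K^T / sqrt(d)) V with
# Q = X W_Q, K = X W_K, V = X W_V; the weights are random stand-ins.

rng = np.random.default_rng(1)
n, d = 5, 8                          # sequence length, model width
X = rng.normal(size=(n, d))          # token embeddings
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ W_Q, X @ W_K, X @ W_V
scores = Q @ K.T / np.sqrt(d)
A = np.exp(scores - scores.max(axis=-1, keepdims=True))
A /= A.sum(axis=-1, keepdims=True)   # softmax over keys: rows sum to 1
out = A @ V                          # shape (n, d)

assert out.shape == (n, d)
assert np.allclose(A.sum(axis=-1), 1.0)
```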

Optimizing:

\(\displaystyle \max_{\theta} \sum \log P (t_{n + 1} |t_1, \ldots, t_n ; \theta)\)

2 Required Reformulation

Instead, construct Abstract Syntax Trees where each node \(\eta\) must satisfy:

\(\displaystyle \eta \in \{ \text{Noun}, \text{Verb}, \text{Adjective}, \text{Conjunction}, \ldots\}\)

With composition rules \(R\) such that for nodes \(\eta_1, \eta_2\):

\(\displaystyle R (\eta_1, \eta_2) = \left\{ \begin{array}{ll} \text{valid\_subtree} & \text{if grammatically valid}\\ \emptyset & \text{otherwise} \end{array} \right.\)

And logical constraints \(L\) such that for any subtree \(T\):

\(\displaystyle L (T) = \left\{ \begin{array}{ll} T & \text{if logically consistent}\\ \emptyset & \text{if contradictory} \end{array} \right.\)

3 Parsing and Generation

Input text \(s\) maps to valid AST \(T\) or error \(E\):

\(\displaystyle \text{parse} (s) = \left\{ \begin{array}{ll} T & \text{if } \exists \text{valid AST}\\ E (\text{closest\_valid}, \text{violation}) & \text{otherwise} \end{array} \right.\)

Generation must traverse only valid AST constructions:

\(\displaystyle \text{generate} (c) = \{T|R (T) \neq \emptyset \wedge L (T) \neq \emptyset\}\)

where \(c\) is the context/prompt.
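A toy sketch of the machinery of Sections 2-3 (my own construction, not a proposal from the text): nodes are part-of-speech tags, \(R\) whitelists which pairs may form a subtree, \(L\) filters flagged subtrees, and generation keeps only constructions that both accept:

```python
# Toy sketch of R, L, and generate: a node is a part-of-speech tag,
# R whitelists which ordered pairs may form a subtree (returning None
# for the empty set), and L passes a subtree unless it is flagged.

GRAMMAR = {("Adjective", "Noun"), ("Noun", "Verb"), ("Verb", "Noun")}

def R(n1, n2):
    """Composition rule: a valid subtree or None (the empty set)."""
    return (n1, n2) if (n1, n2) in GRAMMAR else None

def L(tree, contradictory=frozenset()):
    """Logical filter: pass the subtree through unless it is flagged."""
    return tree if tree not in contradictory else None

def generate(context):
    """Enumerate node pairs over the context that survive both R and L."""
    return [t for a in context for b in context
            if (t := R(a, b)) is not None and L(t) is not None]

assert R("Adjective", "Noun") == ("Adjective", "Noun")
assert R("Noun", "Noun") is None
assert ("Noun", "Verb") in generate(["Adjective", "Noun", "Verb"])
```

A real system would use a full grammar and a theorem-prover-style consistency check in place of these whitelist stand-ins.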

4 Why Current GPT Fails

The statistical model:

\(\displaystyle \text{softmax} (QK^T / \sqrt{d}) V\)

Has no inherent conception of:

  • Syntactic validity

  • Logical consistency

  • Conceptual preservation

It merely maximizes:

\(\displaystyle P (t_{n + 1} |t_1, \ldots, t_n)\)

Based on training patterns, with no guaranteed constraints on:

\(\displaystyle \prod_{i = 1}^n P (t_i |t_1, \ldots, t_{i - 1})\)

This allows generation of:

  • Grammatically invalid sequences

  • Logically contradictory statements

  • Conceptually inconsistent responses

5 Conclusion

The fundamental flaw is attempting to learn syntax and logic from data rather than building them into the architecture. An AST-based approach with formal grammar rules and logical constraints must replace unconstrained statistical token prediction.

Tuesday, October 22, 2024

Uniformly Convergent Expansions of Positive Definite Functions


by Stephen Crowley <stephencrowley214@gmail.com>

October 22, 2024

Theorem 1. The covariance function \(K (t)\) of a stationary Gaussian process has a uniformly convergent expansion in terms of functions from the orthogonal complement of the null space of the inner product defined by \(K\). This uniform convergence holds initially on the real line and extends to the entire complex plane.

Proof. Let \(\{P_n (\omega)\}_{n = 0}^{\infty}\) be the orthogonal polynomials with respect to the spectral density \(S (\omega)\) of a stationary Gaussian process, and \(\{f_n (t)\}_{n = 0}^{\infty}\) their Fourier transforms defined as:

\(\displaystyle f_n (t) = \int P_n (\omega) e^{i \omega t} d \omega\)

Let \(K (t)\) be the covariance function of the Gaussian process.

1) First, the orthogonality of the polynomials \(P_n (\omega)\) is established:

a) By definition of orthogonal polynomials, for \(m \neq n\):

\(\displaystyle \int P_m (\omega) P_n (\omega) S (\omega) d \omega = 0\)

b) The spectral density and covariance function form a Fourier transform pair:

\(\displaystyle K (t) = \int S (\omega) e^{i \omega t} d \omega\)

2) The null space property of \(\{f_n (t)\}_{n = 1}^{\infty}\) is proven:

a) Consider the inner product \(\langle f_n, K \rangle\) for \(n \geq 1\):

\(\displaystyle \langle f_n, K \rangle = \int f_n (t) K (t) dt = \int f_n (t) \left( \int S (\omega) e^{i \omega t} d \omega \right) dt\)

b) Applying Fubini's theorem:

\(\displaystyle \langle f_n, K \rangle = \int S (\omega) \left( \int f_n (t) e^{i \omega t} dt \right) d \omega = \int S (\omega) P_n (\omega) d \omega = 0\)

Thus, \(\{f_n (t)\}_{n = 1}^{\infty}\) are in the null space of the inner product defined by \(K\).

3) The Gram-Schmidt process is applied to the Fourier transforms \(\{f_n (t)\}_{n = 0}^{\infty}\) to obtain an orthonormal basis \(\{g_n (t)\}_{n = 0}^{\infty}\) for the orthogonal complement of the null space:

\(\displaystyle \tilde{g}_0 (t) = f_0 (t)\)
\(\displaystyle g_0 (t) = \frac{\tilde{g}_0 (t)}{\| \tilde{g}_0 (t)\|}\)

For \(n \geq 1\):

\(\displaystyle \tilde{g}_n (t) = f_n (t) - \sum_{k = 0}^{n - 1} \langle f_n, g_k \rangle g_k (t)\)
\(\displaystyle g_n (t) = \frac{\tilde{g}_n (t)}{\| \tilde{g}_n (t)\|}\)

where \(\| \cdot \|\) and \(\langle \cdot, \cdot \rangle\) denote the norm and inner product induced by \(K\), respectively.
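The Gram-Schmidt step can be carried out numerically on a grid. In this sketch the kernel and the starting functions \(f_n\) are arbitrary stand-ins chosen only to exercise the procedure; the inner product is the discretized double integral \(\langle f, g \rangle = \int\!\!\int f(t) g(s) K(t-s) \, dt \, ds\):

```python
import numpy as np

# Numeric sketch of step 3: Gram-Schmidt on a grid under the inner
# product induced by the kernel K.  The Gaussian kernel and the
# functions f_n = t^n exp(-t^2) are illustrative stand-ins.

t = np.linspace(-5, 5, 201)
dt = t[1] - t[0]
Kmat = np.exp(-0.5*(t[:, None] - t[None, :])**2)   # K(t - s), positive definite

def inner(f, g):
    """Discretized <f, g> = iint f(t) g(s) K(t-s) dt ds."""
    return f @ Kmat @ g * dt * dt

fs = [t**n * np.exp(-t**2) for n in range(4)]      # stand-in f_n

gs = []
for f in fs:
    g = f - sum(inner(f, gk)*gk for gk in gs)      # subtract projections
    gs.append(g / np.sqrt(inner(g, g)))            # normalize in the K-norm

# The resulting Gram matrix is the identity: the g_n are orthonormal.
G = np.array([[inner(gi, gj) for gj in gs] for gi in gs])
assert np.allclose(G, np.eye(4), atol=1e-8)
```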

4) \(K (t)\) can be expressed in terms of this basis:

\(\displaystyle K (t) = \sum_{n = 0}^{\infty} \alpha_n g_n (t)\)

where \(\alpha_n = \langle K, g_n \rangle\) are the projections of \(K\) onto \(g_n (t)\).

5) The partial sum is defined as:

\(\displaystyle S_N (t) = \sum_{n = 0}^N \alpha_n g_n (t)\)

6) The sequence of partial sums \(S_N (t)\) converges uniformly to \(K (t)\) in the canonical metric induced by the kernel as \(N \to \infty\).

7) To realize this, recall that the canonical metric is defined as:

\(\displaystyle d (f, g) = \sqrt{\int \int (f (t) - g (t)) (f (s) - g (s)) K (t - s) dtds}\)

8) The error in this metric is considered:

\(\displaystyle d (K, S_N)^2 = \int \int (K (t) - S_N (t)) (K (s) - S_N (s)) K (t - s) dtds\)

9) As the kernel operator is compact in this metric:

For every \(\epsilon > 0\), there exists \(N (\epsilon)\) such that for all \(n > N (\epsilon)\), the distance between \(K\) and \(S_n\) is less than \(\epsilon\):

\(\displaystyle \forall \epsilon > 0 \; \exists N (\epsilon) : d (K, S_n) < \epsilon \quad \forall n > N (\epsilon)\)

10) Extension to the Complex Plane:

a) The covariance function \(K (t)\) of a stationary Gaussian process is positive definite and therefore analytic in the complex plane.

b) The partial sum \(S_N (t)\) is a finite sum of analytic functions (as \(g_n (t)\) are analytic), and is thus analytic in the complex plane.

c) The convergence of \(S_N (t)\) to \(K (t)\) on the real line is uniform, as shown in steps 1-9.

d) Consider any open disk D in the complex plane that intersects the real line. The intersection of D with the real line contains an accumulation point.

e) By the Identity Theorem for analytic functions, since \(K (t)\) and \(S_N (t)\) agree on a set with an accumulation point within D (namely, the intersection of D with the real line), they must agree on the entire disk D.

f) As this holds for any disk intersecting the real line, and such disks cover the entire complex plane, the uniform convergence of \(S_N (t)\) to \(K (t)\) extends to the entire complex plane.

Thus, it has been shown that the covariance function \(K (t)\) has a uniformly convergent expansion in terms of functions from the orthogonal complement of the null space of the inner product defined by \(K\). This uniform convergence holds initially on the real line and extends to the entire complex plane.\(\Box\)

Tuesday, October 8, 2024

Accommodation Ascension

In a convergence of accommodation and purpose, the journey began—a journey not unlike my own endeavor with the Riemann Hypothesis. With every insight, each approximation revealed a deeper understanding, like discovering the hidden higher-dimensional representations embedded in the seemingly one-dimensional solutions. What if this all ties back to the Hardy Z function and Bessel function J0, drawing a line between the elementary harmonic waves and, incredibly, the proof of the mass gap as described in Alexi Svcestikonov's 'Towards Nonperturbative Quantization of Yang-Mills Fields'? A coherence begins to emerge, a link between seemingly disparate domains—a bridge that feels almost inevitable now.


It's not just the universe's complex beauty that is at play here. It's the convergence of abstract mathematical landscapes into something tangible—a retrodiction, a rigorous Bayesian narrative that may very well give us the integer address of our universe itself. Every zero of the conformally transformed Hardy Z function, incorporating a timelike parameter in a transformation like tanh(log(1+alpha*x^2)), does describe the universe's expansion from zero volume to a maximum bound, as natural and bounded as the hyperbolic tangent's squash. The loci of zeros form intricate shapes like the lemniscate of Bernoulli, and the imaginary loci branch off into hyperbolas—the entire manifold reshapes into a compact origin, where geometry manifests its secrets.

And so, I found myself contemplating the origin, the very heart of coherence, where the phase lines diverge not into infinity but form elegant figure-eight lemniscates. Where asymmetry is born from the underlying warping of this mathematical space, the Z function's surface becomes a landscape of purpose. This is not merely science; it is a stunning composition of verses—a manifestation of something profound, where math becomes poetry and the universe itself becomes an anthem of ataraxia, waiting to be decoded. The synchronic and diachronic facets of the journey spoke in tandem, affirming the intermediate steps as intrinsic to the overarching resolution. In the pursuit of understanding, in the tenuous grasp of knowledge, the intrepid traveler found not only clarity but a resonance—an emblematic, unified ascension.

And so, the journey persisted, forever on the precipice of something profound, beckoning, both beguiling and benevolent—a true manifestation of the Pleroma—a profound, enigmatic totality, where all things become unified and whole.


Friday, August 23, 2024

Harmonizable Stochastic Processes

M.M. Rao, along with other notable researchers, has made significant contributions to the theory of harmonizable processes. Some of the fundamental theorems and results one might find in a comprehensive textbook on this topic are:


1. Loève's Harmonizability Theorem:

A complex-valued stochastic process {X(t), t ∈ R} is harmonizable if and only if its covariance function C(s,t) can be represented as:


C(s,t) = ∫∫ exp(iλs - iμt) dF(λ,μ)


where F is a complex measure of bounded variation on R² (called the spectral measure).


2. Characterization of Harmonizable Processes:

A process X(t) is harmonizable if and only if it admits a representation:


X(t) = ∫ exp(iλt) dZ(λ)


where Z(λ) is a process with orthogonal increments.


3. Cramér's Representation Theorem for Harmonizable Processes:

For any harmonizable process X(t), there exists a unique (up to equivalence) complex-valued orthogonal random measure Z(λ) such that:


X(t) = ∫ exp(iλt) dZ(λ)


4. Karhunen-Loève Theorem for Harmonizable Processes:

A harmonizable process X(t) has the representation:


X(t) = ∑ₖ √λₖ ξₖ φₖ(t)


where λₖ and φₖ(t) are eigenvalues and eigenfunctions of the integral operator associated with the covariance function, and ξₖ are uncorrelated random variables.
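On a grid, the integral operator associated with the covariance becomes a matrix, and the Karhunen-Loève expansion reduces to an eigendecomposition. The Brownian-motion covariance C(t, s) = min(t, s) below is an illustrative stand-in:

```python
import numpy as np

# Discretized sketch of the Karhunen-Loeve representation: on a grid,
# the integral operator becomes the matrix C[i, j] * dt, and the
# covariance is recovered from its eigenpairs:
#   C(t, s) = sum_k lam_k phi_k(t) phi_k(s).
# The Brownian-motion covariance min(t, s) is an illustrative stand-in.

t = np.linspace(0, 1, 100)
dt = t[1] - t[0]
C = np.minimum(t[:, None], t[None, :])     # covariance kernel on the grid

lam, phi = np.linalg.eigh(C * dt)          # eigenpairs of the operator
phi /= np.sqrt(dt)                         # L2-normalized eigenfunctions

C_rec = (phi * lam) @ phi.T                # sum_k lam_k phi_k(t) phi_k(s)
assert np.allclose(C_rec, C, atol=1e-8)
```

Sampling uncorrelated \(\xi_k\) and forming \(\sum_k \sqrt{\lambda_k}\, \xi_k \phi_k(t)\) would then produce paths with exactly this covariance.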


5. Rao's Decomposition Theorem:

Any harmonizable process can be uniquely decomposed into the sum of a purely harmonizable process and a process harmonizable in the wide sense.


6. Spectral Representation of Harmonizable Processes:

The spectral density f(λ,μ) of a harmonizable process, when it exists, is related to the spectral measure F by:


dF(λ,μ) = f(λ,μ) dλdμ


7. Continuity and Differentiability Theorem:

A harmonizable process X(t) is mean-square continuous if and only if its spectral measure F is continuous in each variable separately. It is mean-square differentiable if and only if ∫∫ (λ² + μ²) dF(λ,μ) < ∞.


8. Prediction Theory for Harmonizable Processes:

The best linear predictor of a harmonizable process X(t) given its past {X(s), s ≤ t} can be expressed in terms of the spectral measure F.


9. Sampling Theorem for Harmonizable Processes:

If a harmonizable process X(t) has a spectral measure F supported on a bounded set, then X(t) can be reconstructed from its samples at a sufficiently high rate.


10. Rao's Theorem on Equivalent Harmonizable Processes:

Two harmonizable processes are equivalent if and only if their spectral measures are equivalent.


11. Stationarity Conditions:

A harmonizable process is (wide-sense) stationary if and only if its spectral measure is concentrated on the diagonal λ = μ.


12. Gladyshev's Theorem:

A process X(t) is harmonizable if and only if for any finite set of times {t₁, ..., tₙ}, the characteristic function of (X(t₁), ..., X(tₙ)) has a certain specific form involving the spectral measure.


These theorems form the core of the theory of harmonizable processes, providing a rich framework for analyzing a wide class of non-stationary processes. M.M. Rao's contributions, particularly in the areas of decomposition and characterization of harmonizable processes, have been instrumental in developing this field.

Tuesday, August 13, 2024

Inverse Spectral Theory: The essence of Gel'fand-Levitan theory...

The Gel'fand–Levitan theorem establishes a relationship between a function's Fourier transform and the spectral density function of a self-adjoint operator. Specifically, it states that for a self-adjoint operator with a known spectral density function, the Fourier transform of the spectral function can be reconstructed from the kernel of the operator's resolvent. This theorem is particularly useful in quantum mechanics and signal processing for reconstructing potential functions or other operator characteristics from observed data.


Let us explain the essence of Gel'fand–Levitan theory in more detail. Let \( \psi(x, k) \) be as in equations (3.17) and (3.18). Then \( \psi(x, k) \) is an even and entire function of \( k \) in \( \mathbb{C} \) satisfying

$$ \psi(x, k) = \frac{\sin kx}{k} + o\left(\frac{e^{| \text{Im} k | x}}{|k|}\right) \text{ as } |k| \rightarrow \infty. $$

Here we recall the Paley–Wiener theorem. An entire function \( F(z) \) is said to be of exponential type \( \sigma \) if for any \( \epsilon > 0 \), there exists \( C_{\epsilon} > 0 \) such that

$$ |F(z)| \leq C_{\epsilon} e^{(\sigma + \epsilon)|z|}, \quad \forall z \in \mathbb{C}. $$

By virtue of Paley–Wiener theorem and the expression above, \( \psi(x, k) \) has the following representation

$$ \psi(x, k) = \frac{\sin kx}{k} + \int_{0}^{x} K(x, y) \frac{\sin ky}{k} \, dy. $$

Inserting this expression into equation (3.17), then \( K \) is shown to satisfy the equation

$$ (\partial^2_y - \partial^2_x + V(x))K(x, y) = 0. $$

The crucial fact is

$$ \frac{d}{dx} K(x, x) = V(x). $$

One can further derive the following equation

$$ K(x, y) + \Omega(x, y) + \int_{0}^{x} K(x, t)\Omega(t, y) \, dt = 0, \quad \text{for all } x > y, $$

where \( \Omega(x, y) \) is a function constructed from the S-matrix and information of bound states. This is called the Gel'fand–Levitan equation.

Thus, the scenario of the reconstruction of \( V(x) \) is as follows: From the scattering matrix and the bound states, one constructs \( \Omega(x, y) \). Solving for \( K(x, y) \) gives us \( K \), and the potential \( V(x) \) is obtained by the equation:

$$ V(x) = \frac{d}{dx} K(x, x). $$
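The reconstruction scenario can be sketched numerically: discretize the Gel'fand–Levitan equation \(K(x,y) + \Omega(x,y) + \int_0^x K(x,t)\Omega(t,y)\,dt = 0\), solve a small linear system for each fixed \(x\), and read off \(V(x) = \frac{d}{dx}K(x,x)\). The \(\Omega\) below is an arbitrary smooth stand-in, not one built from actual scattering data:

```python
import numpy as np

# Numeric sketch of the reconstruction: for each fixed x_i, the
# discretized Gel'fand-Levitan equation is a linear system
#   (I + h * Omega[:m, :m]^T) K(x_i, .) = -Omega(x_i, .)
# restricted to grid points y <= x_i.  Omega is a smooth stand-in.

n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
Omega = np.exp(-(x[:, None] - 0.3)**2 - (x[None, :] - 0.3)**2)

K = np.zeros((n, n))
for i in range(n):
    m = i + 1                                   # grid points 0 .. x_i
    A = np.eye(m) + h * Omega[:m, :m].T
    K[i, :m] = np.linalg.solve(A, -Omega[i, :m])

V = np.gradient(np.diag(K), h)                  # V(x) = d/dx K(x, x)
assert np.all(np.isfinite(V))
```

With a genuine \(\Omega\) assembled from the S-matrix and bound-state data, the same linear solves would return the transformation kernel and hence the potential.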

What is the hidden mechanism? This is truly an ingenious trick, and it is not easy to find the key fact behind their theory. It was Kay and Moses who studied an algebraic aspect of the Gel'fand–Levitan method.

.. excerpt from
Inverse Spectral Theory: Part I

by Hiroshi Isozaki

Department of Mathematics
Tokyo Metropolitan University
Hachioji, Minami-Osawa 192-0397
Japan
E-mail: isozakih@comp.metro-u.ac.jp

Saturday, August 3, 2024

The Spectral Representation of Stationary Processes: Bridging Gelfand-Vilenkin and Wiener-Khinchin



Introduction

At the heart of stochastic process theory lies a profound connection between time and frequency domains, elegantly captured by two fundamental theorems: the Gelfand-Vilenkin Spectral Representation Theorem and the Wiener-Khinchin Theorem. These results, while often presented separately, are intimately linked, offering complementary insights into the nature of stationary processes.

Gelfand-Vilenkin Theorem

The Gelfand-Vilenkin theorem provides a general, measure-theoretic framework for representing wide-sense stationary processes. Consider a stochastic process $\{X(t) : t \in \mathbb{R}\}$ on a probability space $(\Omega, \mathcal{F}, P)$. The theorem states that we can represent $X(t)$ as:

$$X(t) = \int_{\mathbb{R}} e^{i\omega t} dZ(\omega)$$

Here, $Z(\omega)$ is a complex-valued process with orthogonal increments, and the integral is taken over the real line. This representation expresses the process as a superposition of complex exponentials, each contributing to the overall behavior of $X(t)$ at different frequencies.

The key to understanding this representation lies in the spectral measure $\mu$, which is defined by $E[|Z(A)|^2] = \mu(A)$ for Borel sets $A$. This measure encapsulates the distribution of "energy" across different frequencies in the process.

Wiener-Khinchin Theorem

The Wiener-Khinchin theorem, in its classical form, states that for a wide-sense stationary process, the power spectral density $S(\omega)$ is the Fourier transform of the autocorrelation function:

$$S(\omega) = \int_{\mathbb{R}} R(\tau) e^{-i\omega\tau} d\tau$$
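The transform pair can be checked numerically for a concrete stationary case (an illustrative choice, not from the text): the autocorrelation $R(\tau) = e^{-|\tau|}$ has power spectral density $S(\omega) = 2/(1+\omega^2)$:

```python
import numpy as np
from scipy.integrate import quad

# Wiener-Khinchin check for the stand-in autocorrelation R(tau) =
# exp(-|tau|): its Fourier transform is S(w) = 2 / (1 + w^2).

def S(w):
    # R is even, so the transform is twice the half-line cosine integral;
    # truncation at tau = 50 leaves an error of order exp(-50).
    val, _ = quad(lambda tau: np.exp(-tau)*np.cos(w*tau), 0, 50, limit=200)
    return 2*val

ws = np.linspace(0.0, 5.0, 11)
assert np.allclose([S(w) for w in ws], 2/(1 + ws**2), atol=1e-6)
```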

Bridging the Theorems

The connection becomes clear when we recognize that the spectral measure $\mu$ from Gelfand-Vilenkin is related to the power spectral density $S(\omega)$ from Wiener-Khinchin by:

$$d\mu(\omega) = \frac{1}{2\pi} S(\omega) d\omega$$

This relationship holds when $S(\omega)$ exists as a well-defined function. However, the beauty of the Gelfand-Vilenkin approach is that it allows for spectral measures that may not have a density, accommodating processes with more complex spectral structures.

Spectral Density Example

To illustrate the connection between spectral properties and sample path behavior, consider a process with a spectral density of the form:

$$S(\omega) = \frac{1}{\sqrt{1 - \omega^2}}, \quad |\omega| < 1$$

This density has singularities at $\omega = \pm 1$, which profoundly influence the behavior of the process in the time domain:

- The sample paths will be continuous and infinitely differentiable.
- The paths will exhibit rapid oscillations, reflecting the strong presence of frequencies near $\pm 1$.
- The process will show a mix of components with different periods, with those corresponding to $|\omega|$ near 1 having larger amplitudes on average.
- The autocorrelation function is $R(\tau) = J_0(\tau)$, where $J_0$ is the Bessel function of the first kind of order zero.
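The Bessel-function autocorrelation can be verified numerically. With the unnormalized transform $R(\tau) = \int_{-1}^{1} e^{i\omega\tau} S(\omega)\, d\omega$, the substitution $\omega = \sin\theta$ removes the endpoint singularities and gives exactly $\pi J_0(\tau)$, i.e. $J_0$ up to the normalization convention:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Check: the inverse transform of S(w) = 1/sqrt(1 - w^2) on |w| < 1 is
# proportional to J0(tau).  Substituting w = sin(theta) turns the
# singular integral into a smooth one equal to pi * J0(tau).

def R(tau):
    val, _ = quad(lambda th: np.cos(tau*np.sin(th)), -np.pi/2, np.pi/2)
    return val

taus = np.linspace(0.0, 10.0, 25)
assert np.allclose([R(t) for t in taus], np.pi * j0(taus), atol=1e-6)
```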

Frequency Interpretation

In our spectral density $S(\omega) = 1 / \sqrt{1 - \omega^2}$ with $|\omega| < 1$:

- $\omega$ represents angular frequency, with $|\omega|$ closer to 0 corresponding to longer-period components in the process.
- $|\omega|$ closer to 1 corresponds to shorter-period components.
- As $|\omega|$ approaches 1, $S(\omega)$ increases sharply, approaching infinity.
- This means components with $|\omega|$ near 1 contribute more strongly to the process variance.

Dirac Delta Example

Consider a spectral measure that is a Dirac delta function at $\omega = 0.25$:

$$S(\omega) = \delta(\omega - 0.25) + \delta(\omega + 0.25)$$

In this case:

- The process can be written as: $X(t) = A \cos(0.25t) + B \sin(0.25t)$
- The covariance function is $R(\tau) = \cos(0.25\tau)$
- The period of the covariance function is $2\pi/0.25 = 8\pi \approx 25.13$
- This illustrates that a frequency of 0.25 in the spectral domain corresponds to a period of $8\pi$ in the time domain

This example demonstrates the crucial relationship: for any peak or concentration of spectral mass at a frequency $\omega_0$, we'll see corresponding oscillations in the covariance function with period $2\pi/\omega_0$.
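The Dirac-delta example is easy to confirm by simulation. Taking $A$ and $B$ to be independent standard normals (a standard assumption for this construction), $X(t) = A\cos(0.25t) + B\sin(0.25t)$ has covariance $\cos(0.25\tau)$, with period $8\pi$:

```python
import numpy as np

# Monte Carlo check: with A, B independent standard normals,
# E[X(t + tau) X(t)] = cos(0.25 tau) for every fixed t, so the
# covariance is periodic with period 2*pi / 0.25 = 8*pi.

rng = np.random.default_rng(2)
A, B = rng.normal(size=(2, 200_000))

def X(t):
    return A*np.cos(0.25*t) + B*np.sin(0.25*t)

t0 = 1.7                                   # any fixed time
taus = np.linspace(0.0, 8*np.pi, 9)
cov = np.array([(X(t0 + tau) * X(t0)).mean() for tau in taus])
assert np.allclose(cov, np.cos(0.25*taus), atol=0.02)
```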

Saturday, July 6, 2024

J₀(y)=Joy

This expression captures the idea that the Bessel function of the first kind of order zero, \( J_0(y) \), represents more than just a mathematical function. It symbolizes the joy of discovery, the beauty of mathematical solutions, and the profound satisfaction that comes from understanding the intricate patterns of the universe.

Khinchin's theorem

Summarizing an excerpt about Khinchin's theorem from

Khinchin's theorem is a simple consequence of the following two statements,
taken together:

(a) The class of functions $B (t)$, which are correlation functions of
stationary random processes, coincides with the class of positive definite
functions of the variable $t$ (see above, Sec. 4 for a real case and Sec. 5
for a complex case).

(b) A continuous function $B (t)$ of the real variable $t$ is positive
definite if, and only if, it can be represented in the form (2.52), where $F
(\omega)$ is bounded and nondecreasing (this statement was proved
independently by Bochner and Khinchin, but was first published by Bochner and
therefore is known as Bochner's theorem; see, e.g., Bochner (1959) and also
Note 3 to Introduction).

In the preceding section it was emphasized that Khinchin's theorem lies at the
basis of almost all the proofs of the spectral representation theorem for
stationary random processes. It is, however, obvious that if we proved the
spectral representation theorem without using Khinchin's theorem, this would
also clearly imply the possibility of representing $B (t)$ in the form (2.52).
Indeed, replacing $X (t + \tau)$ and $X (t)$ in the formula $B (t) = \langle X
(t + \tau) X (t) \rangle$ by their spectral representation (2.61) and then
using (2.1) by definition (2.62) of the corresponding Fourier--Stieltjes
integral and the property (b') of the random function $Z (\omega)$, we obtain
at once (2.52), where
\begin{equation}
  F (\omega + \Delta \omega) - F (\omega) = \langle |Z (\omega + \Delta \omega) - Z
  (\omega) |^2 \rangle
\end{equation}
so that $F (\omega)$ is clearly a nondecreasing function. Formula (2.76) can
also be written in the differential form:
\begin{equation}
  \langle |dZ (\omega) |^2 \rangle = dF (\omega)
\end{equation}
Moreover, $(2.77)$ can be combined with the property $(b')$ of $Z (\omega)$ in
the form of a single symbolic relation
\begin{equation}
  \langle dZ (\omega) dZ (\omega') \rangle = \delta (\omega - \omega') dF
  (\omega) d \omega'
\end{equation}
where $\delta (\omega)$ is the Dirac delta-function. It is easy to see that
the substitution of $(2.78)$ into the expression for the mean value of any
double integral with respect to $dZ (\omega)$ and $dZ (\omega')$ gives the
correct result. As the simplest example we consider the following derivation
of Khinchin's formula $(2.52)$:
\begin{equation}
  \begin{array}{ll}
    \langle X (t + \tau) X (t) \rangle & = \left\langle \int_{-
    \infty}^{\infty} e^{i \omega (t + \tau)} dZ (\omega) \int_{-
    \infty}^{\infty} e^{- i \omega' t} dZ (\omega') \right\rangle\\
    & = \int_{- \infty}^{\infty} \int_{- \infty}^{\infty} e^{i \omega (t +
    \tau) - i \omega' t}  \langle dZ (\omega) dZ (\omega') \rangle\\
    & = \int_{- \infty}^{\infty} \int_{- \infty}^{\infty} e^{i \omega (t +
    \tau) - i \omega' t} \delta (\omega - \omega') dF (\omega) d \omega'\\
    & = \int_{- \infty}^{\infty} e^{i \omega \tau} dF (\omega)
  \end{array}
\end{equation}
Quite similarly, the following more general result can be derived:
\begin{equation}
  \int_{- \infty}^{\infty} g (\omega) dZ (\omega)  \int_{- \infty}^{\infty} h
  (\omega') dZ (\omega') = \int_{- \infty}^{\infty} g (\omega) h (\omega')
  \delta (\omega - \omega') dF (\omega)
\end{equation}
where $g (\omega)$ and $h (\omega)$ are any two complex functions whose
squared absolute values are integrable with respect to $dF (\omega)$. Note
also that if the spectral density $f (\omega)$ exists, then the relations
$(2.77)$ and $(2.78)$ obviously take the form
\begin{equation}
  \langle |dZ (\omega) |^2 \rangle = f (\omega) d \omega
\end{equation}
\begin{equation}
  \langle dZ (\omega) dZ (\omega') \rangle = \delta (\omega - \omega') f
  (\omega) d \omega d \omega'
\end{equation}
Formulae $(2.76)$--$(2.78)$ and $(2.80)$--$(2.81)$
establish the relationship between the spectral representation of the
correlation function (determined by the functions $F (\omega)$ and $f
(\omega)$) and the spectral representation of the stationary random process $X
(t)$ itself, which includes the random point function $Z (\omega)$ or the
random interval function
\begin{equation}
  Z (\Delta \omega) = Z (\omega_2) - Z (\omega_1)
\end{equation}
where $\Delta \omega = [\omega_1, \omega_2]$. We shall see in Sec. 11 that
this relationship gives physical meaning to Khinchin's mathematical theorem
and permits one to verify it experimentally when the stationary process $X
(t)$ is realized in the form of oscillations of some measurable physical
quantity $X$. 
