Tweedie distributions are a very general family of distributions that includes the Gaussian, Poisson, and Gamma (among many others) as special cases.
Exponential dispersion models (EDMs) have the following form:
\[f(x; \mu, \sigma^2) = h(\sigma^2, x) \exp\left(\frac{\theta x - A(\theta)}{\sigma^2}\right)\]where $h(\sigma^2, x)$ is the “base distribution”, $\theta(\mu, \sigma^2)$ is the natural parameter, written as a function of the mean and dispersion, and $A(\theta)$ is the cumulant function, which ensures that the density normalizes to 1.
Notice that if we treat $\sigma^2$ as constant, this reduces to the natural exponential family, which has the form \(f(x; \mu) = h(x) \exp\left(\widetilde{\theta} x - A(\widetilde{\theta})\right).\) In this way, we can view EDMs as a generalization of the exponential family that allows for varying dispersion (hence the name).
The mean and variance of an EDM-distributed random variable $X$ are \begin{align} \mathbb{E}[X] &= \mu = A^\prime(\theta) \\ \mathbb{V}[X] &= \sigma^2 A^{\prime \prime}(\theta) = \sigma^2 V(\mu) \end{align} where $V(\mu)$ is called the “variance function”.
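As a quick sanity check of these identities, here is a short SymPy sketch (illustrative only, not part of the derivation; the choice of $A$ anticipates the Poisson case worked out below) that differentiates the cumulant function to recover the mean and the variance function:

```python
import sympy as sp

theta, mu = sp.symbols("theta mu", positive=True)

# Cumulant function for the Poisson case below: A(theta) = e^theta,
# with natural parameter theta = log(mu).
A = sp.exp(theta)

mean = sp.diff(A, theta).subs(theta, sp.log(mu))       # A'(theta)  -> mu
var_fn = sp.diff(A, theta, 2).subs(theta, sp.log(mu))  # A''(theta) -> V(mu) = mu

print(mean, var_fn)  # mu mu
```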
Tweedie families are a special case of the EDMs discussed above. Specifically, Tweedie distributions make an assumption about the relationship between the mean and the variance of the distribution. To specify a Tweedie distribution, another parameter $p \in \mathbb{R}$ is introduced, and we restrict the variance as: \(\mathbb{V}[X] = \sigma^2 V(\mu) = \sigma^2 \mu^p.\)
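The power-variance assumption is simple enough to state in code; the following one-liner (the function name is mine, purely illustrative) makes the special cases treated below explicit:

```python
def tweedie_variance(mu: float, sigma2: float, p: float) -> float:
    """Variance of Tw_p(mu, sigma2), i.e. sigma^2 * mu^p."""
    return sigma2 * mu**p

# The special cases worked out below:
print(tweedie_variance(2.0, 1.5, p=0))  # 1.5  (Gaussian: variance sigma^2)
print(tweedie_variance(2.0, 1.0, p=1))  # 2.0  (Poisson: variance mu, with sigma^2 = 1)
print(tweedie_variance(2.0, 1.5, p=2))  # 6.0  (Gamma: variance sigma^2 * mu^2)
```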
If $X$ is Tweedie-distributed with power parameter $p$, we write \(X \sim \text{Tw}_p(\mu, \sigma^2).\)
Given a value of $p$, writing down the pdf requires finding the proper base measure $h(\sigma^2, x)$ such that the density normalizes to 1. In general, this is difficult for the Tweedie family, but we show a few special cases below.
Let $X \sim \text{Tw}_{p}(\mu, \sigma^2)$ where $p=0$. Then the variance is given by $\mathbb{V}[X] = \sigma^2$. Let $\theta = \mu$, $A(\theta) = \frac12 \theta^2 = \frac12 \mu^2$, and \(h(\sigma^2, x) = \frac{\exp(-\frac{1}{2\sigma^2} x^2)}{\sqrt{2 \pi \sigma^2}}.\) Notice that $A^\prime(\theta) = \theta = \mu$ and $A^{\prime\prime}(\theta) = 1$, which implies that \(\mathbb{V}[X] = \sigma^2 A^{\prime\prime}(\theta) = \sigma^2 \cdot 1 = \sigma^2 \cdot \mu^0,\) so we have satisfied the mean-variance relationship.
Then we have \begin{align} f(x; \mu, \sigma^2) &= \frac{\exp(-\frac{1}{2\sigma^2} x^2)}{\sqrt{2 \pi \sigma^2}} \exp\left( \frac{\mu x - \frac12 \mu^2}{\sigma^2} \right) \\ &= \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( \frac{-\frac12 x^2 + \mu x - \frac12 \mu^2}{\sigma^2} \right) \\ &= \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( -\frac{1}{2\sigma^2} (x^2 - 2\mu x + \mu^2) \right) \\ &= \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( -\frac{1}{2\sigma^2} (x - \mu)^2 \right) \\ \end{align} which is the density for a Gaussian random variable with mean $\mu$ and variance $\sigma^2$.
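To double-check the algebra, this short sketch (a numerical sanity check of my own, not part of the derivation) evaluates the EDM form with $\theta = \mu$ and $A(\theta) = \frac12 \theta^2$ and compares it against SciPy's Gaussian density:

```python
import numpy as np
from scipy import stats

mu, sigma2 = 1.3, 0.7
x = np.linspace(-3.0, 5.0, 201)

# EDM form: h(sigma^2, x) * exp((theta * x - A(theta)) / sigma^2)
# with theta = mu and A(theta) = theta^2 / 2.
h = np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
edm_pdf = h * np.exp((mu * x - mu**2 / 2) / sigma2)

assert np.allclose(edm_pdf, stats.norm.pdf(x, loc=mu, scale=np.sqrt(sigma2)))
```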
Let $X \sim \text{Tw}_{p}(\mu, \sigma^2)$ where $p=1$, and fix the dispersion at $\sigma^2 = 1$. Then the variance is given by $\mathbb{V}[X] = \sigma^2 \mu^1 = \mu$, implying that the mean and variance are equal.
Since $\sigma^2 = 1$, the density’s general form reduces to \(f(x; \mu) = h(x) \exp\left(\theta x - A(\theta)\right).\)
Let $\theta = \log \mu$, $h(x) = \frac{1}{x!}$, and $A(\theta) = e^\theta$. Then we have \begin{align} f(x; \mu) &= \frac{1}{x!} \exp\left(x \log \mu - e^\theta\right) \\ &= \frac{1}{x!} \exp(\log \mu^x) \exp(-e^{\log \mu}) \\ &= \frac{1}{x!} \mu^x e^{-\mu} \end{align} which is the density of a Poisson-distributed random variable with rate parameter $\mu$.
Notice that \(\mathbb{V}[X] = \sigma^2 A^{\prime\prime}(\theta) = \sigma^2 e^\theta = \sigma^2 e^{\log \mu} = \sigma^2 \mu = \sigma^2 \cdot \mu^1 = \sigma^2 \cdot \mu^p\) so we have satisfied the mean-variance relationship (in the case of the Poisson, they’re identical).
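The same kind of numerical check works here; this sketch (again illustrative, not from the original text) evaluates the EDM form with $\theta = \log \mu$, $A(\theta) = e^\theta$, and $h(x) = 1/x!$ and compares it against SciPy's Poisson pmf:

```python
import numpy as np
from math import exp, factorial, log
from scipy import stats

mu = 3.5
ks = np.arange(10)

# EDM form with theta = log(mu), A(theta) = e^theta, h(x) = 1/x!
theta = log(mu)
edm_pmf = np.array(
    [exp(theta * int(k) - exp(theta)) / factorial(int(k)) for k in ks]
)

assert np.allclose(edm_pmf, stats.poisson.pmf(ks, mu))
```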
Let $X \sim \text{Tw}_{p}(\mu, \sigma^2)$ where $p=2$. The variance of $X$ is then $\mathbb{V}[X] = \sigma^2 \mu^2$.
Let $\theta = -\frac{1}{\mu}$, $A(\theta) = -\log (-\theta)$, and $h(\sigma^2, x) = \frac{x^{1/\sigma^2 - 1}}{\Gamma(1/\sigma^2) (\sigma^2)^{1/\sigma^2}}$. Since $\log(-\theta) = \log \frac{1}{\mu} = -\log \mu$, the density is \begin{align} f(x; \mu, \sigma^2) &= h(\sigma^2, x) \exp\left(\frac{\theta x - A(\theta)}{\sigma^2}\right) \\ &= \frac{x^{1/\sigma^2 - 1}}{\Gamma(1/\sigma^2) (\sigma^2)^{1/\sigma^2}} \exp\left( \frac{-x/\mu + \log(-\theta)}{\sigma^2} \right) \\ &= \frac{x^{1/\sigma^2 - 1}}{\Gamma(1/\sigma^2) (\sigma^2)^{1/\sigma^2}} \exp\left( -\frac{x}{\mu \sigma^2} - \frac{1}{\sigma^2} \log \mu \right) \\ &= \frac{x^{1/\sigma^2 - 1}}{\Gamma(1/\sigma^2) (\sigma^2)^{1/\sigma^2}} e^{-x/(\mu \sigma^2)} \mu^{-1/\sigma^2} \\ &= \frac{x^{1/\sigma^2 - 1}}{\Gamma(1/\sigma^2) (\mu \sigma^2)^{1/\sigma^2}} e^{-x/(\mu \sigma^2)} \\ \end{align} which is the density of a Gamma-distributed random variable with shape $1/\sigma^2$ and rate $1/(\mu \sigma^2)$ (equivalently, scale $\mu \sigma^2$), so that $\mathbb{E}[X] = \mu$ as required. Notice that $A^{\prime\prime}(\theta) = 1/\theta^2 = \mu^2$, which gives \(\mathbb{V}[X] = \sigma^2 A^{\prime\prime}(\theta) = \sigma^2 \mu^2 = \sigma^2 \mu^p,\) so we have satisfied the mean-variance relationship.
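As with the other cases, we can verify the algebra numerically. This sketch (a sanity check of my own, using SciPy's shape/scale parameterization of the Gamma) compares the EDM form against the standard Gamma density and confirms the mean-variance relationship:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as gamma_fn

mu, sigma2 = 2.0, 0.5
x = np.linspace(0.01, 10.0, 200)

# EDM form with theta = -1/mu, A(theta) = -log(-theta), and
# h(sigma^2, x) = x^(1/sigma^2 - 1) / (Gamma(1/sigma^2) * (sigma^2)^(1/sigma^2)).
alpha = 1 / sigma2                # shape parameter 1/sigma^2
theta = -1 / mu
h = x ** (alpha - 1) / (gamma_fn(alpha) * sigma2**alpha)
edm_pdf = h * np.exp((theta * x - (-np.log(-theta))) / sigma2)

# Gamma with shape 1/sigma^2 and scale mu * sigma^2 (i.e. rate 1/(mu * sigma^2)).
assert np.allclose(edm_pdf, stats.gamma.pdf(x, a=alpha, scale=mu * sigma2))

# Mean and variance checks: shape * scale = mu, shape * scale^2 = sigma^2 * mu^2.
print(alpha * (mu * sigma2), alpha * (mu * sigma2) ** 2)  # 2.0 2.0
```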