Common
flowMC.resource.model.common
¤
Bijection
¤
Bases: Module
Base class for bijective transformations.
Subclasses must implement forward and inverse. The default __call__ delegates to forward.
This is an abstract template and should not be used directly.
__init__()
abstractmethod
¤
__call__(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
¤
Apply the forward transformation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Float[Array, ' n_dim'] | Input array. | required |
| condition | Float[Array, ' n_condition'] | Conditioning variables. | required |

Returns:

| Type | Description |
|---|---|
| tuple[Float[Array, ' n_dim'], Float] | Transformed output and log-det Jacobian. |
forward(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
abstractmethod
¤
Transform from input space to output space.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Float[Array, ' n_dim'] | Input array. | required |
| condition | Float[Array, ' n_condition'] | Conditioning variables. | required |

Returns:

| Type | Description |
|---|---|
| tuple[Float[Array, ' n_dim'], Float] | Transformed output and log-det Jacobian. |
inverse(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
abstractmethod
¤
Transform from output space back to input space.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Float[Array, ' n_dim'] | Array in the output (transformed) space. | required |
| condition | Float[Array, ' n_condition'] | Conditioning variables. | required |

Returns:

| Type | Description |
|---|---|
| tuple[Float[Array, ' n_dim'], Float] | Inverse output and log-det Jacobian. |
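A concrete Bijection subclass wires these three methods together. The sketch below is illustrative only (it does not use flowMC's actual base class): an elementwise exp transform that honours the same forward/inverse contract, returning the transformed array together with the log-determinant of the Jacobian.

```python
import jax.numpy as jnp

# Illustrative sketch (not flowMC's code): a minimal bijection with the
# forward/inverse contract described above. The transform is elementwise
# exp, so log|det J| of the forward map is sum(x).
class ExpBijection:
    def forward(self, x, condition):
        # y = exp(x); log|det J| = sum(x)
        return jnp.exp(x), jnp.sum(x)

    def inverse(self, x, condition):
        # x = log(y); log|det J| = -sum(log(y))
        return jnp.log(x), -jnp.sum(jnp.log(x))

    def __call__(self, x, condition):
        # __call__ delegates to forward, matching the base-class behaviour.
        return self.forward(x, condition)

bij = ExpBijection()
y, logdet = bij(jnp.array([0.0, 1.0]), jnp.zeros(0))
x_back, inv_logdet = bij.inverse(y, jnp.zeros(0))
```

Note that the forward and inverse log-determinants cancel, which is what lets normalizing flows evaluate densities in either direction.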
Distribution
¤
Bases: Module
Base class for probability distributions.
Subclasses must implement log_prob and sample. The default __call__ delegates to log_prob.
This is an abstract template and should not be used directly.
__init__()
abstractmethod
¤
__call__(x: Array, key: Optional[Key] = None) -> Array
¤
Evaluate the log-probability of x.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Array | Input sample. | required |
| key | Key | Unused; reserved for subclass compatibility. | None |

Returns:

| Name | Type | Description |
|---|---|---|
| Array | Array | Log-probability of x. |
log_prob(x: Array) -> Array
abstractmethod
¤
sample(rng_key: Key, n_samples: int) -> Float[Array, 'n_samples n_features']
abstractmethod
¤
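A subclass fills in log_prob and sample against this interface. The following is an illustrative sketch (not flowMC's implementation) for a standard normal, showing how __call__ accepts but ignores the key argument:

```python
import jax
import jax.numpy as jnp

# Illustrative sketch (not flowMC's code): a distribution exposing the
# log_prob / sample interface described above, for a standard normal.
class StandardNormal:
    def __init__(self, n_dim):
        self.n_dim = n_dim

    def log_prob(self, x):
        # log N(x; 0, I) = -0.5 * ||x||^2 - (n_dim / 2) * log(2 * pi)
        return -0.5 * jnp.sum(x**2) - 0.5 * self.n_dim * jnp.log(2 * jnp.pi)

    def sample(self, rng_key, n_samples):
        # Returns an (n_samples, n_features) array, as in the signature above.
        return jax.random.normal(rng_key, (n_samples, self.n_dim))

    def __call__(self, x, key=None):
        # key is accepted but unused, mirroring the base-class signature.
        return self.log_prob(x)

dist = StandardNormal(2)
lp = dist(jnp.zeros(2))
samples = dist.sample(jax.random.PRNGKey(0), 5)
```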
MLP
¤
Bases: Module
Multilayer perceptron.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| shape | List[int] | Shape of the MLP. The first element is the input dimension, the last element is the output dimension. | required |
| key | Key | Random key. | required |
Attributes:

| Name | Type | Description |
|---|---|---|
| layers | List | List of layers. |
| activation | Callable | Activation function. |
| use_bias | bool | Whether to use bias. |
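The shape semantics can be sketched in plain JAX. This is not flowMC's implementation (which is an equinox Module); the initialization scale and tanh activation here are illustrative choices, but the shape convention matches the table above: each consecutive pair in shape defines one linear layer.

```python
import jax
import jax.numpy as jnp

# Illustrative sketch of the MLP described above (not flowMC's code):
# shape[0] is the input dimension, shape[-1] the output dimension.
def init_mlp(shape, key):
    params = []
    for n_in, n_out in zip(shape[:-1], shape[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (n_out, n_in)) / jnp.sqrt(n_in)
        b = jnp.zeros(n_out)  # analogue of use_bias=True
        params.append((w, b))
    return params

def mlp_apply(params, x, activation=jax.nn.tanh):
    # Activation between hidden layers, linear final layer.
    for w, b in params[:-1]:
        x = activation(w @ x + b)
    w, b = params[-1]
    return w @ x + b

# shape [3, 16, 16, 2]: 3 inputs, two hidden layers of 16, 2 outputs.
params = init_mlp([3, 16, 16, 2], jax.random.PRNGKey(0))
out = mlp_apply(params, jnp.ones(3))
```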
MaskedCouplingLayer
¤
Bases: Bijection
Masked coupling layer.
f(x) = (1 - m) * b(x; c(m * x; z)) + m * x, where b is the inner bijector, m is the mask, and c is the conditioner.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bijector | Bijection | Inner bijector in the masked coupling layer. | required |
| mask | Array | Mask: 0 for input variables that are transformed, 1 for input variables that are not transformed. | required |
mask: Float[Array, ' n_dim']
property
¤
bijector: Bijection = bijector
instance-attribute
¤
_mask: Float[Array, ' n_dim'] = mask
instance-attribute
¤
__init__(bijector: Bijection, mask: Float[Array, ' n_dim'])
¤
forward(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
¤
inverse(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
¤
__call__(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
¤
Apply the forward transformation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Float[Array, ' n_dim'] | Input array. | required |
| condition | Float[Array, ' n_condition'] | Conditioning variables. | required |

Returns:

| Type | Description |
|---|---|
| tuple[Float[Array, ' n_dim'], Float] | Transformed output and log-det Jacobian. |
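The masked coupling formula f(x) = (1 - m) * b(x; c(m * x; z)) + m * x can be demonstrated with a toy stand-in (not flowMC's code). Here the inner bijector b is a pure shift by the conditioner output, so the log-det Jacobian is zero; the key property is that the conditioner only sees the masked (unchanged) coordinates:

```python
import jax.numpy as jnp

# Illustrative sketch of the masked coupling formula above (not flowMC's
# code). The inner bijector is b(x; s) = x + s, a pure shift, so |det J| = 1.
def coupling_forward(x, mask, conditioner):
    shift = conditioner(mask * x)            # c only sees the pass-through dims
    y = (1 - mask) * (x + shift) + mask * x  # transform only where mask == 0
    log_det = 0.0                            # pure shift: log|det J| = 0
    return y, log_det

x = jnp.array([1.0, 2.0])
mask = jnp.array([1.0, 0.0])  # 1 = pass through unchanged, 0 = transform
# Toy conditioner: broadcast the sum of the visible coordinates.
y, log_det = coupling_forward(x, mask, lambda h: jnp.sum(h) * jnp.ones_like(h))
```

The first coordinate (mask 1) passes through unchanged, while the second is shifted by an amount computed from the first, which is what makes the layer invertible.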
MLPAffine
¤
Bases: Bijection
scale_MLP: MLP = scale_MLP
instance-attribute
¤
shift_MLP: MLP = shift_MLP
instance-attribute
¤
dt: Float = dt
class-attribute
instance-attribute
¤
__init__(scale_MLP: MLP, shift_MLP: MLP, dt: Float = 1)
¤
__call__(x: Float[Array, ' n_dim'], condition_x: Float[Array, ' n_cond']) -> Tuple[Float[Array, ' n_dim'], Float]
¤
forward(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
¤
inverse(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
¤
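The idea behind an MLP-driven affine bijection can be sketched in plain JAX. This is not flowMC's exact MLPAffine parameterisation (the role of dt, and the exact use of scale_MLP/shift_MLP, are not documented on this page): here, scale and shift are predicted from the conditioning variables, the scale is exponentiated for positivity, and the log-det Jacobian is the sum of log-scales.

```python
import jax.numpy as jnp

# Illustrative sketch of a conditional affine bijection driven by two
# networks (not flowMC's exact MLPAffine): y = scale * x + shift with
# scale = exp(scale_fn(condition)), so log|det J| = sum(log(scale)).
def mlp_affine_forward(x, condition, scale_fn, shift_fn):
    scale = jnp.exp(scale_fn(condition))  # exponentiate for positivity
    shift = shift_fn(condition)
    return scale * x + shift, jnp.sum(jnp.log(scale))

# Toy stand-ins for the two MLPs.
scale_fn = lambda c: 0.0 * c   # predicts log-scale 0, i.e. scale = 1
shift_fn = lambda c: c + 1.0
y, log_det = mlp_affine_forward(jnp.zeros(2), jnp.zeros(2), scale_fn, shift_fn)
```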
ScalarAffine
¤
Bases: Bijection
scale: Array = jnp.array(scale)
instance-attribute
¤
shift: Array = jnp.array(shift)
instance-attribute
¤
__init__(scale: Float, shift: Float)
¤
__call__(x: Float[Array, ' n_dim'], condition_x: Float[Array, ' n_cond']) -> Tuple[Float[Array, ' n_dim'], Float]
¤
forward(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
¤
inverse(x: Float[Array, ' n_dim'], condition: Float[Array, ' n_condition']) -> tuple[Float[Array, ' n_dim'], Float]
¤
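A scalar affine bijection is the simplest case of the interface above. The sketch below is illustrative (not necessarily flowMC's exact parameterisation): a single scalar scale and shift applied elementwise, with log|det J| = n_dim * log|scale|.

```python
import jax.numpy as jnp

# Illustrative sketch of a scalar affine bijection (not flowMC's code):
# y = scale * x + shift elementwise.
def scalar_affine_forward(x, scale, shift):
    # Jacobian is scale * I, so log|det J| = n_dim * log|scale|.
    return scale * x + shift, x.shape[0] * jnp.log(jnp.abs(scale))

def scalar_affine_inverse(y, scale, shift):
    return (y - shift) / scale, -y.shape[0] * jnp.log(jnp.abs(scale))

y, ld = scalar_affine_forward(jnp.array([1.0, 2.0]), 2.0, 0.5)
x_back, ild = scalar_affine_inverse(y, 2.0, 0.5)
```

The forward and inverse log-determinants sum to zero, as required for any bijection.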
Gaussian
¤
Bases: Distribution
Multivariate Gaussian distribution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mean | Array | Mean. | required |
| cov | Array | Covariance matrix. | required |
| learnable | bool | Whether the mean and covariance matrix are learnable parameters. | False |
Attributes:

| Name | Type | Description |
|---|---|---|
| mean | Array | Mean. |
| cov | Array | Covariance matrix. |
mean: Float[Array, ' n_dim']
property
¤
cov: Float[Array, 'n_dim n_dim']
property
¤
_mean: Float[Array, ' n_dim'] = mean
instance-attribute
¤
_cov: Float[Array, 'n_dim n_dim'] = cov
instance-attribute
¤
learnable: bool = learnable
class-attribute
instance-attribute
¤
__init__(mean: Float[Array, ' n_dim'], cov: Float[Array, 'n_dim n_dim'], learnable: bool = False)
¤
log_prob(x: Float[Array, ' n_dim']) -> Float
¤
sample(rng_key: Key, n_samples: int) -> Float[Array, 'n_samples n_features']
¤
__call__(x: Array, key: Optional[Key] = None) -> Array
¤
Evaluate the log-probability of x.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Array | Input sample. | required |
| key | Key | Unused; reserved for subclass compatibility. | None |

Returns:

| Name | Type | Description |
|---|---|---|
| Array | Array | Log-probability of x. |
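The same log_prob/sample behaviour can be reproduced with jax.scipy rather than flowMC itself; the mean and covariance below are placeholder values, and this sketch ignores the learnable flag:

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import multivariate_normal

# Minimal sketch of the Gaussian interface above using jax.scipy
# (not flowMC's code); mean/cov are placeholder values.
mean = jnp.zeros(2)
cov = jnp.eye(2)

def log_prob(x):
    return multivariate_normal.logpdf(x, mean, cov)

def sample(rng_key, n_samples):
    # (n_samples, n_features) array of draws, as in the signature above.
    return jax.random.multivariate_normal(rng_key, mean, cov, (n_samples,))

lp = log_prob(jnp.zeros(2))
samples = sample(jax.random.PRNGKey(1), 4)
```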
Composable
¤
Bases: Distribution
distributions: list[Distribution] = distributions
instance-attribute
¤
partitions: dict[str, tuple[int, int]] = partitions
instance-attribute
¤
__init__(distributions: list[Distribution], partitions: dict)
¤
log_prob(x: Float[Array, ' n_dim']) -> Float
¤
sample(rng_key: Key, n_samples: int) -> Float[Array, 'n_samples n_features']
¤
__call__(x: Array, key: Optional[Key] = None) -> Array
¤
Evaluate the log-probability of x.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Array | Input sample. | required |
| key | Key | Unused; reserved for subclass compatibility. | None |

Returns:

| Name | Type | Description |
|---|---|---|
| Array | Array | Log-probability of x. |
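The partitions attribute maps names to (start, end) index pairs, suggesting that each component distribution acts on a slice of the full vector. The sketch below shows that composition pattern for independent blocks; the slicing convention and the summation of block log-probabilities are assumptions, not documented flowMC behaviour:

```python
import jax.numpy as jnp

# Illustrative sketch of composing independent distribution blocks
# (not flowMC's code): partitions map a name to a (start, end) slice,
# and the joint log-prob is the sum over blocks.
def composed_log_prob(x, distributions, partitions):
    total = 0.0
    for name, (start, end) in partitions.items():
        total = total + distributions[name](x[start:end])
    return total

def std_normal_log_prob(x):
    # log N(x; 0, I) for a block of dimension x.shape[0].
    return -0.5 * jnp.sum(x**2) - 0.5 * x.shape[0] * jnp.log(2 * jnp.pi)

# Two independent standard-normal blocks of dimensions 2 and 1.
dists = {"a": std_normal_log_prob, "b": std_normal_log_prob}
parts = {"a": (0, 2), "b": (2, 3)}
lp = composed_log_prob(jnp.zeros(3), dists, parts)
```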