Reference
AttentionLayer.attentioncnn — Method

attentioncnn(; T, D, data_ch, radii, channels, activations, use_bias, use_attention, emb_sizes, Ns, patch_sizes, n_heads, sum_attention, rng = Random.default_rng(), use_cuda)

Constructs a convolutional neural network model closure(u, θ) that predicts the commutator error (i.e. the closure). Before every convolutional layer, the input is augmented with the attention mechanism.
Arguments
- T: The data type of the model (default: Float32).
- D: The data dimension.
- data_ch: The number of data channels (usually equal to D).
- radii: An array (length n_layers) with the kernel radii for the convolutional layers. Kernels are symmetric, of size 2r + 1.
- channels: An array (length n_layers) with the channel sizes for the convolutional layers.
- activations: An array (length n_layers) with the activation functions for the convolutional layers.
- use_bias: An array (length n_layers) of booleans indicating whether to use bias in each convolutional layer.
- use_attention: A boolean indicating whether to use the attention mechanism.
- emb_sizes: An array (length n_layers) with the embedding sizes for the attention mechanism.
- Ns: The spatial dimension for all the attention layers.
- patch_sizes: An array (length n_layers) with the patch sizes for the attention mechanism.
- n_heads: An array (length n_layers) with the number of heads for the attention mechanism.
- sum_attention: An array (length n_layers) of booleans indicating whether to sum the attention output with the input.
- rng: A random number generator (default: Random.default_rng()).
- use_cuda: A boolean indicating whether to use CUDA (default: false).
Returns
A tuple (chain, params, state) where
- chain: The constructed Lux.Chain model.
- params: The parameters of the model.
- state: The state of the model.
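A usage sketch under stated assumptions: the keyword values below (layer count, channel sizes, activations, patch sizes) are illustrative placeholders, not recommended settings, and the call assumes the exported API matches the keyword list above.

```julia
using Random
using AttentionLayer  # assumed to export attentioncnn

# Hypothetical two-layer configuration: every per-layer keyword gets a
# length-2 array. All values here are illustrative, not defaults.
closure, θ, st = attentioncnn(;
    T = Float32,
    D = 2,
    data_ch = 2,
    radii = [2, 2],                # kernels of size 2r + 1 = 5
    channels = [8, 2],
    activations = [tanh, identity],
    use_bias = [true, false],
    use_attention = true,
    emb_sizes = [8, 8],
    Ns = 64,                       # spatial size seen by the attention layers
    patch_sizes = [8, 8],
    n_heads = [2, 2],
    sum_attention = [false, false],
    rng = Random.Xoshiro(0),
    use_cuda = false,
)

# Per the description above, the model is then evaluated as closure(u, θ).
```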
AttentionLayer.crop_center — Method

Crop the center of the input array to the desired size.
Arguments
- x: The input array of size [N, N, d, b].
- M: The desired size of the output array.
Returns
An array y of size [M, M, d, b] cropped from the center of x.
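The cropping behavior can be illustrated with a plain-Julia sketch. This is a reimplementation for illustration, not the package source, and it assumes N − M is even so the window is exactly centered:

```julia
# Illustrative reimplementation of the center crop described above.
function crop_center_sketch(x::AbstractArray{T,4}, M::Int) where {T}
    N = size(x, 1)
    lo = (N - M) ÷ 2 + 1   # first index of the centered M×M window
    hi = lo + M - 1
    return x[lo:hi, lo:hi, :, :]
end

x = reshape(collect(1.0:64.0), 4, 4, 2, 2)  # N = 4, d = 2, b = 2
y = crop_center_sketch(x, 2)
size(y)  # (2, 2, 2, 2)
```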
AttentionLayer.decollocate — Method

Interpolate closure force from volume centers to volume faces.
AttentionLayer.interpolate — Method

Interpolate velocity components to volume centers.

TODO: D and dir can be parameters instead of arguments, I think.
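The one-line description suggests a staggered-grid collocation. As a minimal sketch of face-to-center interpolation for a single component along one direction, assuming periodic boundaries and a two-point average (the actual method's signature and stencil are not shown here, so both are assumptions):

```julia
# Hypothetical face-to-center averaging for one velocity component along
# dimension `dir`, with periodic wrap-around. The real method may use a
# different signature or stencil; this only illustrates the idea.
function face_to_center_sketch(u::AbstractArray, dir::Int)
    shifted = circshift(u, ntuple(i -> i == dir ? -1 : 0, ndims(u)))
    return (u .+ shifted) ./ 2   # average of the two faces adjacent to each center
end

u = [1.0, 2.0, 3.0, 4.0]     # face values along one grid line
face_to_center_sketch(u, 1)  # [1.5, 2.5, 3.5, 2.5]
```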
AttentionLayer.uncrop_center — Method

Add the input array y to the center of the array x.
Arguments
- x: The original array of size [N, N, d, b].
- y: The array to be added to the center of x, of size [M, M, d, b].
Returns
An array z of size [N, N, d, b] with y added to the center of x.
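A plain-Julia sketch of this operation, assuming "added" means elementwise summation onto the centered window (it could alternatively mean overwriting; the docstring does not say) and that N − M is even:

```julia
# Illustrative sketch: sum y onto the centered M×M window of a copy of x.
# "Added" is assumed to mean elementwise summation, not replacement.
function uncrop_center_sketch(x::AbstractArray{T,4}, y::AbstractArray{T,4}) where {T}
    N, M = size(x, 1), size(y, 1)
    lo = (N - M) ÷ 2 + 1
    hi = lo + M - 1
    z = copy(x)
    z[lo:hi, lo:hi, :, :] .+= y
    return z
end

x = zeros(4, 4, 1, 1)
y = ones(2, 2, 1, 1)
z = uncrop_center_sketch(x, y)  # ones in the 2×2 center, zeros elsewhere
```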
AttentionLayer.uncrop_center_concat — Method

Concatenate the input array y to the center of the array x along the third dimension.
Arguments
- x: The original array of size [N, N, d, b].
- y: The array to be concatenated to the center of x, of size [M, M, d', b].
Returns
An array z of size [N, N, d + d', b] with y concatenated to the center of x along the third dimension.
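Since the output keeps the full N × N extent while y only covers the center, the sketch below zero-pads y to N × N before concatenating along the channel dimension. The zero padding outside the center is an assumption; the actual method may fill that region differently:

```julia
# Illustrative sketch: zero-pad y to x's spatial size, then concatenate
# along dimension 3 (channels). Zero padding is an assumption.
function uncrop_center_concat_sketch(x::AbstractArray{T,4}, y::AbstractArray{T,4}) where {T}
    N, M = size(x, 1), size(y, 1)
    lo = (N - M) ÷ 2 + 1
    hi = lo + M - 1
    ypad = zeros(T, N, N, size(y, 3), size(y, 4))
    ypad[lo:hi, lo:hi, :, :] .= y
    return cat(x, ypad; dims = 3)
end

x = zeros(4, 4, 2, 1)  # d = 2 channels
y = ones(2, 2, 3, 1)   # d' = 3 channels
z = uncrop_center_concat_sketch(x, y)
size(z)  # (4, 4, 5, 1)
```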