Models#
Here we provide the code for the models and the layers used. Everything has been implemented in PyTorch.
ScoreNetwork_A_CC.py: ScoreNetworkA_CC class. This is a ScoreNetwork model for the adjacency matrix A in the higher-order domain.
- class ccsd.src.models.ScoreNetwork_A_CC.ScoreNetworkA_CC(max_feat_num: int, max_node_num: int, d_min: int, d_max: int, nhid: int, nhid_h: int, num_layers: int, num_layers_h: int, num_linears: int, num_linears_h: int, c_init: int, c_hid: int, c_hid_h: int, c_final: int, c_final_h: int, adim: int, adim_h: int, num_heads: int = 4, num_heads_h: int = 4, conv: str = 'GCN', conv_hodge: str = 'HCN', use_bn: bool = False, is_cc: bool = True)[source]#
Bases:
Module
ScoreNetworkA_CC to calculate the score with respect to the adjacency matrix A in the higher-order domain.
- __init__(max_feat_num: int, max_node_num: int, d_min: int, d_max: int, nhid: int, nhid_h: int, num_layers: int, num_layers_h: int, num_linears: int, num_linears_h: int, c_init: int, c_hid: int, c_hid_h: int, c_final: int, c_final_h: int, adim: int, adim_h: int, num_heads: int = 4, num_heads_h: int = 4, conv: str = 'GCN', conv_hodge: str = 'HCN', use_bn: bool = False, is_cc: bool = True) None [source]#
Initialize the ScoreNetworkA_CC model.
- Parameters:
max_feat_num (int) – maximum number of node features
max_node_num (int) – maximum number of nodes in the graphs
d_min (int) – minimum dimension of the rank-2 cells
d_max (int) – maximum dimension of the rank-2 cells
nhid (int) – number of hidden units in AttentionLayer layers
nhid_h (int) – number of hidden units in HodgeAdjAttentionLayer layers
num_layers (int) – number of AttentionLayer layers
num_layers_h (int) – number of HodgeAdjAttentionLayer layers
num_linears (int) – number of linear layers in the MLP of each AttentionLayer
num_linears_h (int) – number of linear layers in the MLP of each HodgeAdjAttentionLayer
c_init (int) – input dimension of the AttentionLayer and the HodgeAdjAttentionLayer (number of DenseGCNConv and DenseHCNConv layers). Also the number of power iterations used to “duplicate” the adjacency matrix as an input
c_hid (int) – number of hidden units in the MLP of each AttentionLayer
c_hid_h (int) – number of hidden units in the MLP of each HodgeAdjAttentionLayer
c_final (int) – output dimension of the MLP of the last AttentionLayer
c_final_h (int) – output dimension of the MLP of the last HodgeAdjAttentionLayer
adim (int) – attention dimension for the AttentionLayer (except for the first layer).
adim_h (int) – attention dimension for the HodgeAdjAttentionLayer (except for the first layer).
num_heads (int, optional) – number of heads for the Attention. Defaults to 4.
num_heads_h (int, optional) – number of heads for the HodgeAdjAttention. Defaults to 4.
conv (str, optional) – type of convolutional layer, choose from [GCN, MLP]. Defaults to “GCN”.
conv_hodge (str, optional) – type of convolutional layer for the hodge layers, choose from [HCN, MLP]. Defaults to “HCN”.
use_bn (bool, optional) – whether to use batch normalization in the MLP and the AttentionLayer(s). Defaults to False.
is_cc (bool, optional) – True if we generate combinatorial complexes. Defaults to True.
- forward(x: Tensor, adj: Tensor, rank2: Tensor, flags: Tensor | None = None) Tensor [source]#
Forward pass of the ScoreNetworkA_CC. Returns the score with respect to the adjacency matrix A.
- Parameters:
x (torch.Tensor) – node feature matrix
adj (torch.Tensor) – adjacency matrix
rank2 (torch.Tensor) – rank-2 incidence matrix
flags (Optional[torch.Tensor], optional) – optional flags for the score. Defaults to None.
- Returns:
score with respect to the adjacency matrix A
- Return type:
torch.Tensor
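For orientation, the tensors these models exchange follow a fixed layout described throughout the docstrings below. A minimal sketch of that layout (the sizes B, N, F, K are arbitrary illustration values, and the masking step is an assumption about how the `flags` argument is used):

```python
import torch

B, N, F, K = 2, 8, 5, 3          # batch, max nodes, node features, rank-2 channels
NC2 = N * (N - 1) // 2           # number of node pairs: one row per potential edge

x = torch.randn(B, N, F)         # node feature matrix
adj = torch.randn(B, N, N)       # (noisy) adjacency matrix
rank2 = torch.randn(B, NC2, K)   # rank-2 incidence matrix
flags = torch.ones(B, N, 1)      # node mask: 1 = real node, 0 = padding

# Zero out features of padded nodes, as the optional flags suggest
x_masked = x * flags

assert x_masked.shape == (B, N, F) and rank2.shape == (B, 28, K)
```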
ScoreNetwork_F.py: ScoreNetworkF class. This is a ScoreNetwork model that operates on the rank2 incidence matrix of the combinatorial complex.
- class ccsd.src.models.ScoreNetwork_F.ScoreNetworkF(num_layers_mlp: int, num_layers: int, num_linears: int, nhid: int, c_hid: int, c_final: int, cnum: int, max_node_num: int, d_min: int, d_max: int, use_hodge_mask: bool = True, use_bn: bool = False, is_cc: bool = True)[source]#
Bases:
Module
ScoreNetworkF to calculate the score with respect to the rank2 incidence matrix.
- __init__(num_layers_mlp: int, num_layers: int, num_linears: int, nhid: int, c_hid: int, c_final: int, cnum: int, max_node_num: int, d_min: int, d_max: int, use_hodge_mask: bool = True, use_bn: bool = False, is_cc: bool = True) None [source]#
Initialize the ScoreNetworkF model.
- Parameters:
num_layers_mlp (int) – number of layers in the final MLP
num_layers (int) – number of HodgeNetworkLayer layers
num_linears (int) – number of additional layers in the MLP of the HodgeNetworkLayer
nhid (int) – number of hidden units in the MLP of the HodgeNetworkLayer
c_hid (int) – number of output channels in the intermediate HodgeNetworkLayer(s)
c_final (int) – number of output channels in the last HodgeNetworkLayer
cnum (int) – number of power iterations of the rank2 incidence matrix. Also the number of input channels in the first HodgeNetworkLayer.
max_node_num (int) – maximum number of nodes in the combinatorial complex
d_min (int) – minimum size of the rank2 cells
d_max (int) – maximum size of the rank2 cells
use_hodge_mask (bool, optional) – whether to use the hodge mask. Defaults to True.
use_bn (bool, optional) – whether to use batch normalization in the MLP. Defaults to False.
is_cc (bool, optional) – whether the input is a combinatorial complex (always the case for this model). Defaults to True.
- forward(x: Tensor, adj: Tensor, rank2: Tensor, flags: Tensor | None = None) Tensor [source]#
Forward pass of the ScoreNetworkF. Returns the score with respect to the rank2 incidence matrix.
- Parameters:
x (torch.Tensor) – node feature matrix
adj (torch.Tensor) – adjacency matrix
rank2 (torch.Tensor) – rank2 incidence matrix
flags (Optional[torch.Tensor], optional) – optional flags for the score. Defaults to None.
- Returns:
score with respect to the rank2 incidence matrix
- Return type:
torch.Tensor
ScoreNetwork_A.py: ScoreNetworkA and BaselineNetwork classes. These are ScoreNetwork models for the adjacency matrix A.
Adapted from Jo et al. (2022).
Left almost unchanged.
- class ccsd.src.models.ScoreNetwork_A.BaselineNetworkLayer(num_linears: int, conv_input_dim: int, conv_output_dim: int, input_dim: int, output_dim: int, use_bn: bool = False)[source]#
Bases:
Module
BaselineNetworkLayer that operates on tensors derived from an adjacency matrix A. Used in the BaselineNetwork model.
- __init__(num_linears: int, conv_input_dim: int, conv_output_dim: int, input_dim: int, output_dim: int, use_bn: bool = False) None [source]#
Initialize the BaselineNetworkLayer.
- Parameters:
num_linears (int) – number of linear layers in the MLP (except the first one)
conv_input_dim (int) – input dimension of the DenseGCNConv layers
conv_output_dim (int) – output dimension of the DenseGCNConv layers
input_dim (int) – number of DenseGCNConv layers (part of the input dimension of the final MLP)
output_dim (int) – output dimension of the final MLP
use_bn (bool, optional) – whether to use batch normalization in the MLP. Defaults to False.
- forward(x: Tensor, adj: Tensor, flags: Tensor | None) Tuple[Tensor, Tensor] [source]#
Forward pass of the BaselineNetworkLayer.
- Parameters:
x (torch.Tensor) – node feature matrix
adj (torch.Tensor) – adjacency matrix
flags (Optional[torch.Tensor]) – optional flags for the node features
- Returns:
node feature matrix and adjacency matrix
- Return type:
Tuple[torch.Tensor, torch.Tensor]
- class ccsd.src.models.ScoreNetwork_A.BaselineNetwork(max_feat_num: int, max_node_num: int, nhid: int, num_layers: int, num_linears: int, c_init: int, c_hid: int, c_final: int, adim: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False, is_cc: bool = False)[source]#
Bases:
Module
BaselineNetwork to calculate the score with respect to the adjacency matrix A.
- __init__(max_feat_num: int, max_node_num: int, nhid: int, num_layers: int, num_linears: int, c_init: int, c_hid: int, c_final: int, adim: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False, is_cc: bool = False) None [source]#
Initialize the BaselineNetwork.
- Parameters:
max_feat_num (int) – maximum number of node features
max_node_num (int) – maximum number of nodes in the graphs
nhid (int) – number of hidden units in BaselineNetworkLayer layers
num_layers (int) – number of BaselineNetworkLayer layers
num_linears (int) – number of linear layers in the MLP of each BaselineNetworkLayer
c_init (int) – input dimension of the BaselineNetworkLayer (number of DenseGCNConv layers). Also the number of power iterations used to “duplicate” the adjacency matrix as an input
c_hid (int) – number of hidden units in the MLP of each BaselineNetworkLayer
c_final (int) – output dimension of the MLP of the last BaselineNetworkLayer
adim (int) – UNUSED HERE. attention dimension (except for the first layer).
num_heads (int, optional) – UNUSED HERE. number of heads for the Attention. Defaults to 4.
conv (str, optional) – UNUSED HERE. type of convolutional layer, choose from [GCN, MLP]. Defaults to “GCN”.
use_bn (bool, optional) – whether to use batch normalization in the MLP and the BaselineNetworkLayer(s). Defaults to False.
is_cc (bool, optional) – True if we generate combinatorial complexes. Defaults to False.
- forward_graph(x: Tensor, adj: Tensor, flags: Tensor | None = None) Tensor [source]#
Forward pass of the BaselineNetwork. Returns the score with respect to the adjacency matrix A.
- Parameters:
x (torch.Tensor) – node feature matrix
adj (torch.Tensor) – adjacency matrix
flags (Optional[torch.Tensor], optional) – optional flags for the score. Defaults to None.
- Returns:
score with respect to the adjacency matrix A
- Return type:
torch.Tensor
- forward_cc(x: Tensor, adj: Tensor, rank2: Tensor, flags: Tensor | None = None) Tensor [source]#
Forward pass of the BaselineNetwork. Returns the score with respect to the adjacency matrix A.
- Parameters:
x (torch.Tensor) – node feature matrix
adj (torch.Tensor) – adjacency matrix
rank2 (torch.Tensor) – rank2 incidence matrix
flags (Optional[torch.Tensor], optional) – optional flags for the score. Defaults to None.
- Returns:
score with respect to the adjacency matrix A
- Return type:
torch.Tensor
- class ccsd.src.models.ScoreNetwork_A.ScoreNetworkA(max_feat_num: int, max_node_num: int, nhid: int, num_layers: int, num_linears: int, c_init: int, c_hid: int, c_final: int, adim: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False, is_cc: bool = False)[source]#
Bases:
Module
ScoreNetworkA to calculate the score with respect to the adjacency matrix A.
- __init__(max_feat_num: int, max_node_num: int, nhid: int, num_layers: int, num_linears: int, c_init: int, c_hid: int, c_final: int, adim: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False, is_cc: bool = False) None [source]#
Initialize the ScoreNetworkA model.
- Parameters:
max_feat_num (int) – maximum number of node features
max_node_num (int) – maximum number of nodes in the graphs
nhid (int) – number of hidden units in AttentionLayer layers
num_layers (int) – number of AttentionLayer layers
num_linears (int) – number of linear layers in the MLP of each AttentionLayer
c_init (int) – input dimension of the AttentionLayer (number of DenseGCNConv layers). Also the number of power iterations used to “duplicate” the adjacency matrix as an input
c_hid (int) – number of hidden units in the MLP of each AttentionLayer
c_final (int) – output dimension of the MLP of the last AttentionLayer
adim (int) – attention dimension (except for the first layer).
num_heads (int, optional) – number of heads for the Attention. Defaults to 4.
conv (str, optional) – type of convolutional layer, choose from [GCN, MLP]. Defaults to “GCN”.
use_bn (bool, optional) – whether to use batch normalization in the MLP and the AttentionLayer(s). Defaults to False.
is_cc (bool, optional) – True if we generate combinatorial complexes. Defaults to False.
- forward_graph(x: Tensor, adj: Tensor, flags: Tensor | None = None) Tensor [source]#
Forward pass of the ScoreNetworkA. Returns the score with respect to the adjacency matrix A.
- Parameters:
x (torch.Tensor) – node feature matrix
adj (torch.Tensor) – adjacency matrix
flags (Optional[torch.Tensor], optional) – optional flags for the score. Defaults to None.
- Returns:
score with respect to the adjacency matrix A
- Return type:
torch.Tensor
- forward_cc(x: Tensor, adj: Tensor, rank2: Tensor, flags: Tensor | None = None) Tensor [source]#
Forward pass of the ScoreNetworkA. Returns the score with respect to the adjacency matrix A.
- Parameters:
x (torch.Tensor) – node feature matrix
adj (torch.Tensor) – adjacency matrix
rank2 (torch.Tensor) – rank2 incidence matrix
flags (Optional[torch.Tensor], optional) – optional flags for the score. Defaults to None.
- Returns:
score with respect to the adjacency matrix A
- Return type:
torch.Tensor
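Several of these models share the c_init convention of “duplicating” the adjacency matrix via power iterations, stacking successive powers of A into input channels. A hedged sketch of how such a multi-channel input can be built (`stack_adj_powers` is a hypothetical helper; the exact scheme in the code may differ, e.g. in normalization):

```python
import torch

def stack_adj_powers(adj: torch.Tensor, c_init: int) -> torch.Tensor:
    """Stack A^1 ... A^{c_init} along a channel dim: (B, N, N) -> (B, c_init, N, N)."""
    powers = [adj]
    for _ in range(c_init - 1):
        powers.append(torch.bmm(powers[-1], adj))  # A^{k+1} = A^k @ A
    return torch.stack(powers, dim=1)

adj = torch.rand(2, 6, 6)
adj_c = stack_adj_powers(adj, c_init=3)
assert adj_c.shape == (2, 3, 6, 6)
```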
ScoreNetwork_X.py: ScoreNetworkX and ScoreNetworkX_GMH classes. These are ScoreNetwork models for the node feature matrix X.
Adapted from Jo et al. (2022).
Left almost unchanged.
- class ccsd.src.models.ScoreNetwork_X.ScoreNetworkX(max_feat_num: int, depth: int, nhid: int, use_bn: bool = False, is_cc: bool = False)[source]#
Bases:
Module
ScoreNetworkX network model. Returns the score with respect to the node feature matrix X.
- __init__(max_feat_num: int, depth: int, nhid: int, use_bn: bool = False, is_cc: bool = False) None [source]#
Initialize ScoreNetworkX.
- Parameters:
max_feat_num (int) – maximum number of node features (input and output dimension of the network)
depth (int) – number of DenseGCNConv layers
nhid (int) – number of hidden units in DenseGCNConv layers
use_bn (bool, optional) – True if we use batch normalization in the MLP. Defaults to False.
is_cc (bool, optional) – True if we generate combinatorial complexes. Defaults to False.
- forward_graph(x: Tensor, adj: Tensor, flags: Tensor | None) Tensor [source]#
Forward pass of the ScoreNetworkX model.
- Parameters:
x (torch.Tensor) – node feature matrix (B x N x F)
adj (torch.Tensor) – adjacency matrix (B x N x N)
flags (Optional[torch.Tensor]) – optional mask matrix (B x N x 1)
- Returns:
score with respect to the node feature matrix (B x N x F)
- Return type:
torch.Tensor
- forward_cc(x: Tensor, adj: Tensor, rank2: Tensor, flags: Tensor | None) Tensor [source]#
Forward pass of the ScoreNetworkX model.
- Parameters:
x (torch.Tensor) – node feature matrix (B x N x F)
adj (torch.Tensor) – adjacency matrix (B x N x N)
rank2 (torch.Tensor) – rank2 incidence matrix (B x (NC2) x K)
flags (Optional[torch.Tensor]) – optional mask matrix (B x N x 1)
- Returns:
score with respect to the node feature matrix (B x N x F)
- Return type:
torch.Tensor
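To make the ScoreNetworkX interface concrete, here is a deliberately simplified analogue, not the actual implementation: a stack of dense GCN-style layers whose intermediate outputs are concatenated and projected back to the feature dimension. `TinyScoreNetworkX` and the plain `adj @ (X W)` propagation are illustration-only stand-ins for the real DenseGCNConv layers and final MLP:

```python
import torch
import torch.nn as nn

class TinyScoreNetworkX(nn.Module):
    """Illustrative sketch: depth dense GCN-style layers + final projection."""
    def __init__(self, max_feat_num: int, depth: int, nhid: int):
        super().__init__()
        dims = [max_feat_num] + [nhid] * depth
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(depth)
        )
        # concatenate the input and every intermediate representation
        self.final = nn.Linear(max_feat_num + depth * nhid, max_feat_num)

    def forward(self, x, adj, flags=None):
        xs = [x]
        for layer in self.layers:
            xs.append(torch.relu(adj @ layer(xs[-1])))  # dense "conv": A @ (X W)
        score = self.final(torch.cat(xs, dim=-1))       # (B, N, F)
        if flags is not None:
            score = score * flags                        # mask padded nodes
        return score

model = TinyScoreNetworkX(max_feat_num=5, depth=2, nhid=8)
out = model(torch.randn(2, 4, 5), torch.rand(2, 4, 4), torch.ones(2, 4, 1))
assert out.shape == (2, 4, 5)
```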
- class ccsd.src.models.ScoreNetwork_X.ScoreNetworkX_GMH(max_feat_num: int, depth: int, nhid: int, num_linears: int, c_init: int, c_hid: int, c_final: int, adim: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False, is_cc: bool = False)[source]#
Bases:
Module
ScoreNetworkX_GMH network model with graph multi-head attention. Returns the score with respect to the node feature matrix X.
- __init__(max_feat_num: int, depth: int, nhid: int, num_linears: int, c_init: int, c_hid: int, c_final: int, adim: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False, is_cc: bool = False) None [source]#
Initialize ScoreNetworkX_GMH.
- Parameters:
max_feat_num (int) – maximum number of node features (input and output dimension of the network)
depth (int) – number of DenseGCNConv layers
nhid (int) – number of hidden units in DenseGCNConv layers
num_linears (int) – number of linear layers in AttentionLayer
c_init (int) – input dimension of the AttentionLayer (number of Attention modules). Also the number of power iterations used to “duplicate” the adjacency matrix as an input
c_hid (int) – output dimension of the MLP in the AttentionLayer
c_final (int) – output dimension of the MLP in the AttentionLayer for the last layer of this model
adim (int) – attention dimension (except for the first layer)
num_heads (int, optional) – number of heads for the Attention. Defaults to 4.
conv (str, optional) – type of convolutional layer, choose from [GCN, MLP]. Defaults to “GCN”.
use_bn (bool, optional) – True if we use batch normalization in the MLP and the AttentionLayer(s). Defaults to False.
is_cc (bool, optional) – True if we generate combinatorial complexes. Defaults to False.
- forward_graph(x: Tensor, adj: Tensor, flags: Tensor | None) Tensor [source]#
Forward pass of the ScoreNetworkX_GMH model.
- Parameters:
x (torch.Tensor) – node feature matrix (B x N x F)
adj (torch.Tensor) – adjacency matrix (B x N x N)
flags (Optional[torch.Tensor]) – optional mask matrix (B x N x 1)
- Returns:
score with respect to the node feature matrix (B x N x F)
- Return type:
torch.Tensor
- forward_cc(x: Tensor, adj: Tensor, rank2: Tensor, flags: Tensor | None) Tensor [source]#
Forward pass of the ScoreNetworkX_GMH model.
- Parameters:
x (torch.Tensor) – node feature matrix (B x N x F)
adj (torch.Tensor) – adjacency matrix (B x N x N)
rank2 (torch.Tensor) – rank2 incidence matrix (B x (NC2) x K)
flags (Optional[torch.Tensor]) – optional mask matrix (B x N x 1)
- Returns:
score with respect to the node feature matrix (B x N x F)
- Return type:
torch.Tensor
hodge_attention.py: HodgeAttention and HodgeAdjAttentionLayer classes for the ScoreNetwork models.
- class ccsd.src.models.hodge_attention.HodgeAttention(in_dim: int, attn_dim: int, out_dim: int, num_heads: int = 4, conv: str = 'HCN')[source]#
Bases:
Module
Hodge Combinatorial Complexes Multi-Head (HCCMH) Attention layer
Used in the HodgeAdjAttentionLayer below
- __init__(in_dim: int, attn_dim: int, out_dim: int, num_heads: int = 4, conv: str = 'HCN') None [source]#
Initialize the HodgeAttention layer
- Parameters:
in_dim (int) – input dimension
attn_dim (int) – attention dimension
out_dim (int) – output dimension
num_heads (int, optional) – number of attention heads. Defaults to 4.
conv (str, optional) – type of convolutional layer, choose from [HCN, MLP]. Defaults to “HCN”.
- forward(hodge_adj: Tensor, rank2: Tensor, flags: Tensor, attention_mask: Tensor | None = None) Tuple[Tensor, Tensor] [source]#
Forward pass of the HodgeAttention layer. Returns the value, attention matrix.
- Parameters:
hodge_adj (torch.Tensor) – hodge adjacency matrix
rank2 (torch.Tensor) – rank-2 incidence matrix
flags (torch.Tensor) – node flags
attention_mask (Optional[torch.Tensor], optional) – attention mask for the attention matrix. Defaults to None.
- Returns:
value, attention matrix
- Return type:
Tuple[torch.Tensor, torch.Tensor]
- get_ccnn(in_dim: int, attn_dim: int, out_dim: int, conv: str = 'HCN') Tuple[Callable[[Tensor, Tensor], Tensor] | Callable[[Tensor], Tensor], Callable[[Tensor, Tensor], Tensor] | Callable[[Tensor], Tensor], Callable[[Tensor, Tensor], Tensor]] [source]#
Initialize the HCNs
- Parameters:
in_dim (int) – input dimension
attn_dim (int) – attention dimension
out_dim (int) – output dimension
conv (str, optional) – type of convolutional layer, choose from [HCN, MLP]. Defaults to “HCN”.
- Raises:
NotImplementedError – raised if the requested convolutional layer type is not implemented
- Returns:
three HCNs, one for the query, one for the key, and one for the value
- Return type:
Tuple[Union[Callable[[torch.Tensor, torch.Tensor], torch.Tensor], Callable[[torch.Tensor], torch.Tensor]], Union[Callable[[torch.Tensor, torch.Tensor], torch.Tensor], Callable[[torch.Tensor], torch.Tensor]], Callable[[torch.Tensor, torch.Tensor], torch.Tensor]]
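The query/key/value decomposition above is standard scaled dot-product attention applied between rank-2 cells. A hedged single-head sketch, with the HCN convolutions replaced by plain linear maps for illustration (the real layer derives Q, K, V via get_ccnn and is multi-head):

```python
import math
import torch
import torch.nn as nn

in_dim, attn_dim = 4, 16
q_proj = nn.Linear(in_dim, attn_dim)
k_proj = nn.Linear(in_dim, attn_dim)
v_proj = nn.Linear(in_dim, in_dim)

rank2 = torch.randn(2, 28, in_dim)   # (B, NC2, K) rank-2 incidence matrix

Q, K, V = q_proj(rank2), k_proj(rank2), v_proj(rank2)

# scaled dot-product attention between rank-2 cells
A = torch.softmax(Q @ K.transpose(-1, -2) / math.sqrt(attn_dim), dim=-1)
value = A @ V                        # attended value, same shape as rank2

assert A.shape == (2, 28, 28) and value.shape == (2, 28, in_dim)
```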
- class ccsd.src.models.hodge_attention.HodgeAdjAttentionLayer(num_linears: int, input_dim: int, attn_dim: int, conv_output_dim: int, N: int, d_min: int, d_max: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False)[source]#
Bases:
Module
HodgeAdjAttentionLayer for ScoreNetworkA_CC
- __init__(num_linears: int, input_dim: int, attn_dim: int, conv_output_dim: int, N: int, d_min: int, d_max: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False) None [source]#
Initialize the HodgeAdjAttentionLayer
- Parameters:
num_linears (int) – number of linear layers in the MLPs
input_dim (int) – input dimension of the HodgeAdjAttentionLayer (also the number of HodgeAttention modules)
attn_dim (int) – attention dimension
conv_output_dim (int) – output dimension of the MLP (output number of channels)
N (int) – maximum number of nodes
d_min (int) – minimum size of rank-2 cells
d_max (int) – maximum size of rank-2 cells
num_heads (int, optional) – number of heads for the Attention. Defaults to 4.
conv (str, optional) – type of convolutional layer, choose from [GCN, MLP]. Defaults to “GCN”.
use_bn (bool, optional) – whether to use batch normalization in the MLP. Defaults to False.
- forward(hodge_adj: Tensor, rank2: Tensor, flags: Tensor | None) Tuple[Tensor, Tensor] [source]#
Forward pass for the HodgeAdjAttentionLayer. Returns a hodge adjacency matrix and a rank-2 incidence matrix.
- Parameters:
hodge_adj (torch.Tensor) – hodge adjacency matrix (B x C_i x (NC2) x (NC2)), where C_i is the number of input channels
rank2 (torch.Tensor) – rank-2 incidence matrix (B x (NC2) x K)
flags (Optional[torch.Tensor]) – flags for the nodes
- Returns:
hodge adjacency matrix and rank-2 incidence matrix, of shapes (B x C_o x (NC2) x (NC2)) and (B x (NC2) x K), where C_o is the number of output channels
- Return type:
Tuple[torch.Tensor, torch.Tensor]
hodge_layers.py: DenseHCNConv and HodgeNetworkLayer classes for the ScoreNetwork models and other layers.
- class ccsd.src.models.hodge_layers.HodgeNetworkLayer(num_linears: int, input_dim: int, nhid: int, output_dim: int, d_min: int, d_max: int, use_bn: bool = False)[source]#
Bases:
Module
HodgeNetworkLayer that operates on tensors derived from a rank2 incidence matrix F. Used in the ScoreNetworkF model.
- __init__(num_linears: int, input_dim: int, nhid: int, output_dim: int, d_min: int, d_max: int, use_bn: bool = False) None [source]#
Initialize the HodgeNetworkLayer.
- Parameters:
num_linears (int) – number of linear layers in the MLP (except the first one)
input_dim (int) – input dimension of the MLP
nhid (int) – number of hidden units in the MLP
output_dim (int) – output dimension of the MLP
d_min (int) – minimum size of the rank2 cells
d_max (int) – maximum size of the rank2 cells
use_bn (bool, optional) – whether to use batch normalization in the MLP. Defaults to False.
- forward(rank2: Tensor, N: int, flags: Tensor | None) Tuple[Tensor, Tensor, Tensor] [source]#
Forward pass of the HodgeNetworkLayer.
- Parameters:
rank2 (torch.Tensor) – rank2 incidence matrix
N (int) – maximum number of nodes
flags (Optional[torch.Tensor]) – optional flags for the rank2 incidence matrix
- Returns:
node feature matrix, adjacency matrix, and rank2 incidence matrix
- Return type:
Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
- class ccsd.src.models.hodge_layers.DenseHCNConv(in_channels: int, out_channels: int, bias: bool = True)[source]#
Bases:
Module
DenseHCN layer (Hodge Convolutional Network layer).
- __init__(in_channels: int, out_channels: int, bias: bool = True) None [source]#
Initialize the DenseHCNConv layer.
- Parameters:
in_channels (int) – input channels: must be the last dimension of a rank-2 incidence matrix
out_channels (int) – output channels: output dimension of the layer; it can be an attention dimension or the output dimension of the value matrix (last dimension of a rank-2 incidence matrix)
bias (bool, optional) – if True, add bias parameters. Defaults to True.
- reset_parameters() None [source]#
Reset the parameters of the DenseHCNConv layer. Initialize them with Glorot uniform initialization for the weight and zeros for the bias.
- forward(hodge_adj: Tensor, rank2: Tensor, mask: Tensor | None = None) Tensor [source]#
Forward pass of the DenseHCNConv layer.
- Parameters:
hodge_adj (torch.Tensor) – hodge adjacency matrix (B * (NC2) * (NC2))
rank2 (torch.Tensor) – rank-2 incidence matrix (B * (NC2) * K)
mask (Optional[torch.Tensor], optional) – Optional mask for the output. Defaults to None.
- Returns:
output of the DenseHCNConv layer (B * (NC2) * F_o)
- Return type:
torch.Tensor
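Given the shapes above, the layer plausibly propagates rank-2 features through the Hodge adjacency, roughly out = hodge_adj @ rank2 @ W + b. A hedged sketch of that reading (`TinyDenseHCNConv` is an illustrative stand-in; the actual layer may normalize or mask differently):

```python
import torch
import torch.nn as nn

class TinyDenseHCNConv(nn.Module):
    """Sketch of a dense Hodge convolution: H @ (F W) + b."""
    def __init__(self, in_channels: int, out_channels: int, bias: bool = True):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_channels, out_channels))
        self.bias = nn.Parameter(torch.zeros(out_channels)) if bias else None
        nn.init.xavier_uniform_(self.weight)      # Glorot uniform, as documented

    def forward(self, hodge_adj, rank2, mask=None):
        out = hodge_adj @ (rank2 @ self.weight)   # (B, NC2, F_o)
        if self.bias is not None:
            out = out + self.bias
        if mask is not None:
            out = out * mask                       # optional output mask
        return out

B, NC2, K, F_o = 2, 28, 4, 6
layer = TinyDenseHCNConv(K, F_o)
out = layer(torch.rand(B, NC2, NC2), torch.randn(B, NC2, K))
assert out.shape == (B, NC2, F_o)
```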
- class ccsd.src.models.hodge_layers.BaselineBlock(in_dim: int, hidden_dim: int, out_dim: int)[source]#
Bases:
Module
Combinatorial Complexes BaselineBlock layer
Used in the HodgeBaselineLayer below
- __init__(in_dim: int, hidden_dim: int, out_dim: int) None [source]#
Initialize the BaselineBlock layer
- Parameters:
in_dim (int) – input dimension
hidden_dim (int) – hidden dimension
out_dim (int) – output dimension
- forward(hodge_adj: Tensor, rank2: Tensor, flags: Tensor, attention_mask: Tensor | None = None) Tuple[Tensor, Tensor] [source]#
Forward pass of the BaselineBlock layer. Returns the value, attention matrix.
- Parameters:
hodge_adj (torch.Tensor) – hodge adjacency matrix
rank2 (torch.Tensor) – rank-2 incidence matrix
flags (torch.Tensor) – node flags
attention_mask (Optional[torch.Tensor], optional) – UNUSED HERE. Defaults to None.
- Returns:
rank2, hodge_adj matrix
- Return type:
Tuple[torch.Tensor, torch.Tensor]
- class ccsd.src.models.hodge_layers.HodgeBaselineLayer(num_linears: int, input_dim: int, hidden_dim: int, conv_output_dim: int, N: int, d_min: int, d_max: int, use_bn: bool = False)[source]#
Bases:
Module
HodgeBaselineLayer for ScoreNetworkA_Base_CC with baseline blocks
- __init__(num_linears: int, input_dim: int, hidden_dim: int, conv_output_dim: int, N: int, d_min: int, d_max: int, use_bn: bool = False) None [source]#
Initialize the HodgeBaselineLayer
- Parameters:
num_linears (int) – number of linear layers in the MLPs
input_dim (int) – input dimension of the HodgeBaselineLayer (also the number of BaselineBlock modules)
hidden_dim (int) – hidden dimension
conv_output_dim (int) – output dimension of the MLP (output number of channels)
N (int) – maximum number of nodes
d_min (int) – minimum size of rank-2 cells
d_max (int) – maximum size of rank-2 cells
use_bn (bool, optional) – whether to use batch normalization in the MLP. Defaults to False.
- forward(hodge_adj: Tensor, rank2: Tensor, flags: Tensor | None) Tuple[Tensor, Tensor] [source]#
Forward pass for the HodgeBaselineLayer. Returns a hodge adjacency matrix and a rank-2 incidence matrix.
- Parameters:
hodge_adj (torch.Tensor) – hodge adjacency matrix (B x C_i x (NC2) x (NC2)), where C_i is the number of input channels
rank2 (torch.Tensor) – rank-2 incidence matrix (B x (NC2) x K)
flags (Optional[torch.Tensor]) – flags for the nodes
- Returns:
hodge adjacency matrix and rank-2 incidence matrix, of shapes (B x C_o x (NC2) x (NC2)) and (B x (NC2) x K), where C_o is the number of output channels
- Return type:
Tuple[torch.Tensor, torch.Tensor]
attention.py: Attention and AttentionLayer classes for the ScoreNetwork models.
Adapted from Jo et al. (2022).
Left almost unchanged.
- class ccsd.src.models.attention.Attention(in_dim: int, attn_dim: int, out_dim: int, num_heads: int = 4, conv: str = 'GCN')[source]#
Bases:
Module
Graph Multi-Head (GMH) Attention layer
Adapted from Baek et al. (2021)
Used in the AttentionLayer below
- __init__(in_dim: int, attn_dim: int, out_dim: int, num_heads: int = 4, conv: str = 'GCN') None [source]#
Initialize the Attention layer
- Parameters:
in_dim (int) – input dimension
attn_dim (int) – attention dimension
out_dim (int) – output dimension
num_heads (int, optional) – number of attention heads. Defaults to 4.
conv (str, optional) – type of convolutional layer, choose from [GCN, MLP]. Defaults to “GCN”.
- forward(x: Tensor, adj: Tensor, flags: Tensor, attention_mask: Tensor | None = None) Tuple[Tensor, Tensor] [source]#
Forward pass of the Attention layer. Returns the value and attention matrix.
- Parameters:
x (torch.Tensor) – node features
adj (torch.Tensor) – adjacency matrix
flags (torch.Tensor) – node flags
attention_mask (Optional[torch.Tensor], optional) – attention mask for the attention matrix. Defaults to None.
- Returns:
value and attention matrix
- Return type:
Tuple[torch.Tensor, torch.Tensor]
- get_gnn(in_dim: int, attn_dim: int, out_dim: int, conv: str = 'GCN') Tuple[Callable[[Tensor, Tensor], Tensor] | Callable[[Tensor], Tensor], Callable[[Tensor, Tensor], Tensor] | Callable[[Tensor], Tensor], Callable[[Tensor, Tensor], Tensor]] [source]#
Initialize the three GNNs
- Parameters:
in_dim (int) – input dimension
attn_dim (int) – attention dimension
out_dim (int) – output dimension
conv (str, optional) – type of convolutional layer, choose from [GCN, MLP]. Defaults to “GCN”.
- Raises:
NotImplementedError – raised if the requested convolutional layer type is not implemented
- Returns:
three GNNs, one for the query, one for the key, and one for the value
- Return type:
Tuple[Union[Callable[[torch.Tensor, torch.Tensor], torch.Tensor], Callable[[torch.Tensor], torch.Tensor]], Union[Callable[[torch.Tensor, torch.Tensor], torch.Tensor], Callable[[torch.Tensor], torch.Tensor]], Callable[[torch.Tensor, torch.Tensor], torch.Tensor]]
- class ccsd.src.models.attention.AttentionLayer(num_linears: int, conv_input_dim: int, attn_dim: int, conv_output_dim: int, input_dim: int, output_dim: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False)[source]#
Bases:
Module
AttentionLayer for ScoreNetworkA
- __init__(num_linears: int, conv_input_dim: int, attn_dim: int, conv_output_dim: int, input_dim: int, output_dim: int, num_heads: int = 4, conv: str = 'GCN', use_bn: bool = False) None [source]#
Initialize the AttentionLayer
- Parameters:
num_linears (int) – number of linear layers in the MLP
conv_input_dim (int) – input dimension of the convolutional layer
attn_dim (int) – attention dimension
conv_output_dim (int) – output dimension of the convolutional layer
input_dim (int) – input dimension of the AttentionLayer (number of Attention modules)
output_dim (int) – output dimension of the MLP
num_heads (int, optional) – number of heads for the Attention. Defaults to 4.
conv (str, optional) – type of convolutional layer, choose from [GCN, MLP]. Defaults to “GCN”.
use_bn (bool, optional) – whether to use batch normalization in the MLP. Defaults to False.
- forward(x: Tensor, adj: Tensor, flags: Tensor | None) Tuple[Tensor, Tensor] [source]#
Forward pass for the AttentionLayer. Returns a node feature matrix and an adjacency matrix.
- Parameters:
x (torch.Tensor) – node feature matrix (B x N x F_i), where F_i is the input node feature dimension (=input_dim in GCNConv)
adj (torch.Tensor) – adjacency matrix (B x C_i x N x N)
flags (Optional[torch.Tensor]) – flags for the nodes
- Returns:
node feature matrix and adjacency matrix, of shapes (B x N x F_o) and (B x C_o x N x N), where F_o is the output node feature dimension (=output_dim in GCNConv)
- Return type:
Tuple[torch.Tensor, torch.Tensor]
layers.py: DenseGCNConv and MLP class for the Attention layers/the ScoreNetwork models. It also contains some parameters initialization functions.
Adapted from Jo, J. & al (2022)
Almost left untouched.
- ccsd.src.models.layers.glorot(tensor: Tensor | None) None [source]#
Initialize the tensor with Glorot uniform initialization. (Glorot uniform initialization is also called Xavier uniform initialization)
- Parameters:
tensor (Optional[torch.Tensor]) – tensor to be initialized
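Glorot uniform initialization samples weights from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)). A quick sketch using PyTorch's built-in equivalent:

```python
import math
import torch

fan_in, fan_out = 5, 8
bound = math.sqrt(6.0 / (fan_in + fan_out))   # Glorot/Xavier uniform bound

weight = torch.empty(fan_in, fan_out)
torch.nn.init.xavier_uniform_(weight)         # samples from U(-bound, bound)

assert weight.abs().max().item() <= bound + 1e-6
```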
- ccsd.src.models.layers.zeros(tensor: Tensor | None) None [source]#
Initialize the tensor with zeros.
- Parameters:
tensor (Optional[torch.Tensor]) – tensor to be initialized
- ccsd.src.models.layers.reset(value: Any) None [source]#
Reset the parameters of a value object and all its children. The value object must have a reset_parameters method to reset its parameters and a children method that returns its children to also reset its children if any.
- Parameters:
value (Any) – value object with parameters to be reset
- class ccsd.src.models.layers.DenseGCNConv(in_channels: int, out_channels: int, improved: bool = False, bias: bool = True)[source]#
Bases:
Module
Dense GCN layer (Graph Convolutional Network layer) with adjacency matrix.
It is similar to the operator described in Kipf, T. N., & Welling, M. (2016), Semi-Supervised Classification with Graph Convolutional Networks
See also torch geometric (torch_geometric.nn.conv.GCNConv)
- __init__(in_channels: int, out_channels: int, improved: bool = False, bias: bool = True) None [source]#
Initialize the DenseGCNConv layer.
- Parameters:
in_channels (int) – input channels
out_channels (int) – output channels
improved (bool, optional) – if True, put 2 in the diagonal coefficients of the adjacency matrix, else 1. Defaults to False.
bias (bool, optional) – if True, add bias parameters. Defaults to True.
- reset_parameters() None [source]#
Reset the parameters of the DenseGCNConv layer. Initialize them with Glorot uniform initialization for the weight and zeros for the bias.
- forward(x: Tensor, adj: Tensor, mask: Tensor | None = None, add_loop: bool = True) Tensor [source]#
Forward pass of the DenseGCNConv layer.
- Parameters:
x (torch.Tensor) – node feature matrix (B * N * F_i)
adj (torch.Tensor) – adjacency matrix (B * N * N)
mask (Optional[torch.Tensor], optional) – Optional mask for the output. Defaults to None.
add_loop (bool, optional) – if False, the layer will not automatically add self-loops to the adjacency matrices. Defaults to True.
- Returns:
output of the DenseGCNConv layer (B * N * F_o)
- Return type:
torch.Tensor
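The core computation follows Kipf & Welling: add self-loops, then propagate with the symmetrically normalized adjacency D^{-1/2}(A+I)D^{-1/2} X W. A hedged sketch of that step (`dense_gcn_forward` is an illustrative function, not the actual method; the real layer also handles masks and optional self-loop skipping):

```python
import torch

def dense_gcn_forward(x, adj, weight, improved: bool = False):
    """Sketch of one dense GCN propagation step."""
    B, N, _ = adj.shape
    eye = torch.eye(N).expand(B, N, N)
    adj = adj + (2.0 if improved else 1.0) * eye           # add (weighted) self-loops
    deg_inv_sqrt = adj.sum(dim=-1).clamp(min=1).pow(-0.5)  # D^{-1/2}
    adj = deg_inv_sqrt.unsqueeze(-1) * adj * deg_inv_sqrt.unsqueeze(-2)
    return adj @ (x @ weight)                              # (B, N, F_o)

x, adj = torch.randn(2, 4, 5), torch.rand(2, 4, 4)
out = dense_gcn_forward(x, adj, torch.randn(5, 7))
assert out.shape == (2, 4, 7)
```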
- class ccsd.src.models.layers.MLP(num_layers: int, input_dim: int, hidden_dim: int, output_dim: int, use_bn: bool = False, activate_func: ~typing.Callable[[~torch.Tensor], ~torch.Tensor] = <function relu>)[source]#
Bases:
Module
Multi-Layer Perceptron (MLP) layer.
- __init__(num_layers: int, input_dim: int, hidden_dim: int, output_dim: int, use_bn: bool = False, activate_func: ~typing.Callable[[~torch.Tensor], ~torch.Tensor] = <function relu>) None [source]#
Initialize the MLP layer.
- Parameters:
num_layers (int) – number of additional layers in the network (i.e., excluding the input layer). If num_layers=1, this reduces to a linear model.
input_dim (int) – input dimension
hidden_dim (int) – hidden dimension of the intermediate layers
output_dim (int) – output dimension
use_bn (bool, optional) – if True, add Batch Normalization after each hidden layer. Defaults to False.
activate_func (Callable[[torch.Tensor], torch.Tensor], optional) – activation layer (must be non-linear) to be applied at the end of each layer. Defaults to F.relu.
- Raises:
ValueError – raised if the number of layers is less than 1.
- reset_parameters() None [source]#
Reset the parameters of the MLP layer. Initialize them with Glorot uniform initialization for the weight and zeros for the bias.
- forward(x: Tensor) Tensor [source]#
Forward pass of the MLP layer.
- Parameters:
x (torch.Tensor) – input tensor (num_classes * B * N * F_i), where num_classes is the number of input classes (each treated with different weights and biases), B is the batch size, N is the maximum number of nodes across the batch, and F_i is the input node feature dimension (=input_dim)
- Returns:
output tensor (num_classes * B * N * F_o), where F_o is the output node feature dimension (=output_dim)
- Return type:
torch.Tensor
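A sketch of the num_layers convention described above, where num_layers=1 collapses to a single linear map (`make_mlp` is an illustrative helper; the actual class also supports batch normalization and a configurable activation):

```python
import torch
import torch.nn as nn

def make_mlp(num_layers: int, input_dim: int, hidden_dim: int, output_dim: int):
    """num_layers counts the layers after the input, so num_layers=1 is linear."""
    if num_layers < 1:
        raise ValueError("num_layers must be >= 1")
    if num_layers == 1:
        return nn.Linear(input_dim, output_dim)
    layers = [nn.Linear(input_dim, hidden_dim), nn.ReLU()]
    for _ in range(num_layers - 2):
        layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
    layers.append(nn.Linear(hidden_dim, output_dim))
    return nn.Sequential(*layers)

mlp = make_mlp(num_layers=3, input_dim=5, hidden_dim=16, output_dim=2)
out = mlp(torch.randn(4, 7, 5))   # linear layers act on the last dimension
assert out.shape == (4, 7, 2)
```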