"On how to incorporate space(time) symmetries into your neural networks"
Abstract: Many problems in science, e.g. in particle physics, electrodynamics, medical imaging, and protein engineering, are unchanged under transformations of the underlying space or spacetime. Deep neural networks that process data from these fields can gain in data efficiency, parameter complexity, and generalization if they incorporate such space(time) symmetries from the start. In this talk we show how this can be achieved, with the help of geometric algebra, for broad classes of neural network architectures, including variants of multilayer perceptrons, transformers, message passing networks, and convolutional neural networks.
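The key fact the talk builds on can be illustrated in a few lines: if data are represented as multivectors of a geometric algebra, the geometric product commutes with rotations applied as rotor sandwich products, so layers built from geometric products inherit the symmetry. Below is a minimal, hypothetical sketch in plain NumPy for the 2D algebra Cl(2,0); all function names are illustrative and not taken from any particular library.

```python
import numpy as np

# Multivectors of Cl(2,0), stored as coefficients [scalar, e1, e2, e12].

def gp(a, b):
    """Geometric product of two multivectors (e1^2 = e2^2 = 1, e12^2 = -1)."""
    s   = a[0]*b[0] + a[1]*b[1] + a[2]*b[2] - a[3]*b[3]
    e1  = a[0]*b[1] + a[1]*b[0] - a[2]*b[3] + a[3]*b[2]
    e2  = a[0]*b[2] + a[2]*b[0] + a[1]*b[3] - a[3]*b[1]
    e12 = a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1]
    return np.array([s, e1, e2, e12])

def reverse(a):
    """Reversion: flips the sign of the bivector part."""
    return np.array([a[0], a[1], a[2], -a[3]])

def rotor(theta):
    """Rotor encoding a rotation by theta in the e1-e2 plane."""
    return np.array([np.cos(theta / 2), 0.0, 0.0, -np.sin(theta / 2)])

def rotate(R, x):
    """Sandwich product R x R~: the GA form of applying a rotation."""
    return gp(gp(R, x), reverse(R))

# Equivariance: since R~ R = 1, we have (R x R~)(R y R~) = R (x y) R~,
# i.e. the geometric product commutes with rotations.
rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)
R = rotor(0.7)
assert np.allclose(gp(rotate(R, x), rotate(R, y)), rotate(R, gp(x, y)))
```

The same mechanism scales to higher-dimensional algebras: any network whose layers combine multivector features only through geometric products (plus grade-wise linear maps) transforms predictably under the symmetry group, which is the property the architectures in the talk exploit.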