The neurons of an artificial neural network are connected by weights. When the network is fed input data, the activation of each neuron is computed by summing the activations of the preceding neurons, each multiplied by the weight of its connection, and applying a nonlinearity. During training, these weights are adjusted such that the network optimally fits the input data distribution. This adjustment is computed by propagating the output error back through all layers of the network.
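As a minimal sketch of this forward computation, assuming a single layer with a ReLU nonlinearity (all values are illustrative, not taken from the project):

```python
import numpy as np

# Activations of the preceding layer -- illustrative values.
prev_activations = np.array([0.5, -1.0, 2.0])

# Weight matrix connecting the previous layer (3 neurons) to the
# current layer (2 neurons); values are arbitrary for this sketch.
weights = np.array([[0.1, 0.4, -0.2],
                    [0.3, -0.1, 0.5]])

# Each neuron's pre-activation is the weighted sum of its inputs;
# a nonlinearity (here ReLU) then yields the neuron's activation.
pre_activation = weights @ prev_activations
activation = np.maximum(pre_activation, 0.0)
print(activation)  # one activation value per neuron in the current layer
```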
The goal of this project is to introduce fixed, artificial weights into the network. These weights are deliberately chosen to induce specific properties in some parts of the network. Such a human-made filter could, for example, be used to enforce a feature-importance ranking in one particular layer.
This could help answer questions such as:
- Do specific features contain more information than others (e.g., spatial position in image, color, patterns)?
- Can a network be forced to learn a ranking of such features?
- Can the network be forced to compensate for known patterns in the data?
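One way such fixed weights could be kept out of training is to re-pin them after every optimizer step. A minimal sketch with plain SGD on a single weight matrix (the mask and values are illustrative assumptions, not part of the project description):

```python
import numpy as np

rng = np.random.default_rng(0)

# Trainable weights of one layer (2 inputs -> 2 outputs).
W = rng.normal(size=(2, 2))

# Human-made fixed filter: the first row is pinned to a chosen pattern
# (e.g. to rank feature 0 above feature 1); values are illustrative.
fixed_mask = np.array([[True, True],
                       [False, False]])
fixed_values = np.array([[1.0, 0.1],
                         [0.0, 0.0]])
W[fixed_mask] = fixed_values[fixed_mask]

def sgd_step(W, grad, lr=0.1):
    """Gradient step that leaves the fixed entries untouched."""
    W = W - lr * grad
    W[fixed_mask] = fixed_values[fixed_mask]  # re-pin the fixed weights
    return W

# One illustrative update: only the non-fixed row actually changes.
grad = rng.normal(size=(2, 2))
W = sgd_step(W, grad)
```

In PyTorch, a comparable effect can be achieved by excluding the fixed tensor from the optimizer (or setting `requires_grad=False` on it), or by re-applying the fixed values after each optimizer step.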
During training, the weight adjustment in neural networks is made automatically by the optimizer. While the network learns the distribution of the training dataset, the value distribution in hidden neurons typically does not reveal any useful information to a human. This makes activations in the network uninterpretable and uncontrollable.
- Get familiar with TensorFlow / PyTorch
- Define use-cases where static filters could bring advantages
- Experiment with static filters in common network architectures
- Evaluate the approach in terms of
- Programming skills in Python
  (preferably also with PyTorch or TensorFlow)
- Basic knowledge of neural networks
- Scope: Bachelor/Master
- Duration: 6-month project, 3-month thesis (Bachelor) / 6-month thesis (Master)
- Start: immediately