Research

Picture: XXXX

In addition to tools for parallel programming, we develop highly scalable parallel algorithms for selected application areas. One example is simulating the evolution of the neural network in the brain. This network is not hardwired: even in the mature brain, new connections between neurons are formed and existing ones deleted, a phenomenon known as structural plasticity. The dynamics of the connectome are key to understanding learning, memory, and healing after lesions such as stroke.

To predict which neurons connect to each other, a naïve algorithm would compute connection probabilities for all pairs of neurons, an effort that grows with the square of the number of neurons. For large-scale simulations with millions of neurons and beyond, this quadratic growth is prohibitive. Inspired by hierarchical methods for solving n-body problems in particle physics, we have developed a scalable approximation algorithm that reduces the complexity to O(n log n) without any notable impact on the quality of the results. Other recent examples of our algorithmic work include efficient parallel algorithms for space mission planning.
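The hierarchical idea borrowed from n-body solvers can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the actual simulation: it places neurons in one dimension, uses an illustrative Gaussian distance kernel for connection attraction, and applies a Barnes-Hut-style acceptance criterion so that distant groups of neurons are summarized by their centroid instead of being evaluated pair by pair, turning the quadratic pairwise sum into a tree traversal.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node of a 1D spatial bisection tree over neuron positions."""
    xmin: float
    xmax: float
    points: list
    children: list = field(default_factory=list)
    centroid: float = 0.0
    count: int = 0

def build(points, xmin, xmax, leaf_size=4):
    """Recursively bisect the interval until leaves hold few neurons."""
    node = Node(xmin, xmax, points)
    node.count = len(points)
    node.centroid = sum(points) / len(points)
    if len(points) > leaf_size:
        mid = 0.5 * (xmin + xmax)
        left = [p for p in points if p < mid]
        right = [p for p in points if p >= mid]
        if left and right:
            node.children = [build(left, xmin, mid, leaf_size),
                             build(right, mid, xmax, leaf_size)]
    return node

def kernel(d, sigma=1.0):
    """Illustrative Gaussian connection kernel: nearby neurons attract more."""
    return math.exp(-(d / sigma) ** 2)

def attraction(node, x, theta=0.5):
    """Approximate total attraction felt at position x.

    If a subtree is far away relative to its size (size/d < theta),
    treat all its neurons as sitting at the centroid; otherwise recurse.
    theta=0 disables the approximation and reproduces the exact sum.
    """
    if not node.children:  # leaf: evaluate its few neurons exactly
        return sum(kernel(abs(p - x)) for p in node.points)
    d = abs(node.centroid - x)
    size = node.xmax - node.xmin
    if d > 0 and size / d < theta:  # far enough: centroid stands in for the group
        return node.count * kernel(d)
    return sum(attraction(c, x, theta) for c in node.children)

# Toy usage: 21 neurons on a line, query the attraction at one position.
pts = [i * 0.5 for i in range(21)]
tree = build(pts, 0.0, 10.0)
x = 3.3
exact = sum(kernel(abs(p - x)) for p in pts)       # O(n) per query, O(n^2) overall
approx = attraction(tree, x, theta=0.5)            # O(log n) per query on average
```

Because the kernel decays quickly with distance, the centroid summaries of far-away groups introduce only a small error, which is the same trade-off that makes Barnes-Hut-style n-body solvers effective.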