## Hi, I'm Lance!

My goal is to advance our understanding of intelligent systems by modeling cognitive systems and improving artificial systems.

I'm a PhD candidate with Greg Pavliotis and Karl Friston jointly at Imperial College London and UCL, and a student in the Mathematics of Random Systems CDT run by Imperial College London and the University of Oxford. I completed an MRes in Brain Sciences at UCL with Karl Friston and Biswa Sengupta, an MASt in Pure Mathematics at the University of Cambridge with Oscar Randal-Williams, and a BSc in Mathematics at EPFL and the University of Toronto. My work is supported by Luxembourg's AFR Individual PhD grant.

Contact: l.da-costa at imperial.ac.uk

## Collaborate

I am always on the lookout for collaborators in and around my areas of interest and expertise. If this is you, please drop me a quick note.

## Research

My research lies at the intersection of applied mathematics, cognitive science, and machine learning. Here is a snapshot of current projects:

### Probabilistic foundations of machine learning

Many problems in machine learning (and elsewhere) can be framed as optimization on spaces of probability distributions. The theory of Markov processes offers generic methods and algorithms for solving this problem efficiently, with benefits for key applications. Key papers: [1].
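As a toy illustration of this framing (my own sketch, not taken from any particular paper): sampling from a target distribution can be viewed as minimizing the KL divergence to that target over the space of probability distributions, and the Langevin diffusion is a Markov process whose law follows exactly this descent. A minimal unadjusted Langevin algorithm, here targeting a standard Gaussian, looks like:

```python
import math
import random

def ula_sample(grad_potential, x0=0.0, step=1e-2,
               n_steps=200_000, burn_in=50_000, seed=0):
    """Unadjusted Langevin algorithm: a Markov chain whose law
    approximately follows the gradient flow of KL(. || target)
    on the space of probability distributions."""
    rng = random.Random(seed)
    x, samples = x0, []
    for t in range(n_steps):
        # Euler-Maruyama step of dX = -grad U(X) dt + sqrt(2) dW
        x += -step * grad_potential(x) + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
        if t >= burn_in:
            samples.append(x)
    return samples

# Target: standard Gaussian, potential U(x) = x^2 / 2, so grad U(x) = x.
samples = ula_sample(lambda x: x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

After burn-in, the chain's samples have mean and variance close to those of the target, up to a small discretization bias set by the step size.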

### Bayesian modeling of cognitive systems

The free-energy principle has been proposed as a unifying theory of action, perception, and learning in the brain. We are discovering that this theory can be derived through careful examination of the statistical physics of systems in open exchange with their environment. Key papers: [2,3,4,5].

### Decision-making in autonomous agents

Building upon descriptions of decision-making from cognitive science and artificial intelligence, we need to develop scalable methodologies for intelligent and explainable decision-making in artificial agents. Key papers: [6,7,8,9,10,11].

### World model and causal representation learning

The problem of causal representation learning (learning generative models from data) is at the core of cognitive science and artificial intelligence. How do infants learn a model of the world that supports common-sense reasoning, and do so rapidly and with limited data? How can we understand this distinctively human ability and replicate it in machines? I am interested in combining AI approaches to this problem with descriptions of human world models and human developmental learning, and with the perspectives offered by optimization on spaces of probability distributions. Key papers: [12,13,14].

For an up-to-date list of publications and references, please see my Google Scholar profile.