Collective behavior in Bayesian agents

SECOND CALL
APPLICATION ID: ALL7 

What we are looking for:

We are looking for forward-thinking research proposals that not only explore the frontier of Bayesian methods for collective behavior but also investigate how these methods can be combined synergistically with other advanced computational techniques.

 

Madrid, Spain


The context:

 

Multiagent Bayesian systems represent a fascinating and rapidly evolving area of research in AI, with wide-ranging applications and deep theoretical foundations. The interplay of Bayesian reasoning, strategic interaction, and learning in these systems offers rich opportunities for innovation and discovery, pushing the boundaries of what intelligent systems can achieve in complex, uncertain environments.

The problem to address:

 

David Rios Insua from ICMAT and Gonzalo de Polavieja from CIC are collaborating to investigate the collective behavior of Bayesian agents in both competitive and cooperative settings. We are particularly focused on the integration of Bayesian models with multiagent reinforcement learning and LLMs, aiming to develop more sophisticated and nuanced models of agent interaction that more closely mimic complex decision-making processes akin to human reasoning and negotiation, as well as analysing the evolution from cooperation to competition and vice versa.
The results hold significant potential for a broad spectrum of applications and are supported by various industry partners, including Aeroengy, TheBasement, and Algebraic AI, as well as international academic collaborators from institutions like CNR-IMATI, GWU, Aalto and the Champalimaud Foundation.

Objectives:

  • Develop integrative models combining Bayesian reasoning with other computational methods like reinforcement learning and LLMs to facilitate enhanced communication and strategy formulation among agents.
  • Examine the implications of such integrated approaches in practical scenarios, ranging from collaborative robotics to interactive decision support systems.
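As a flavor of the first objective, the following sketch (hypothetical and for illustration only; none of these names come from the call) shows the simplest building block such integrative models start from: a Bayesian agent in a repeated 2x2 game that maintains a Beta posterior over the probability that its opponent cooperates and best-responds to that belief each round.

```python
import numpy as np

# Payoff matrix for the row player (illustrative prisoner's-dilemma values):
# rows = my action (0 = cooperate, 1 = defect),
# cols = opponent action (0 = cooperate, 1 = defect).
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

class BayesianAgent:
    """Hypothetical agent: Beta-Bernoulli belief over the opponent's play."""

    def __init__(self, a=1.0, b=1.0):
        # Beta(a, b) prior over P(opponent cooperates); (1, 1) is uniform.
        self.a, self.b = a, b

    def act(self):
        # Best response to the current posterior-mean belief.
        p_coop = self.a / (self.a + self.b)
        expected = PAYOFF @ np.array([p_coop, 1.0 - p_coop])
        return int(np.argmax(expected))

    def observe(self, opponent_action):
        # Conjugate Beta-Bernoulli update from the observed action.
        if opponent_action == 0:
            self.a += 1.0
        else:
            self.b += 1.0

agent = BayesianAgent()
# Observe an opponent that cooperates in 8 of 10 rounds.
for action in [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]:
    agent.observe(action)
print(agent.a / (agent.a + agent.b))  # posterior mean = 0.75
```

The research envisaged in this call would replace the fixed best-response rule with learned policies (multiagent RL) or LLM-mediated negotiation, while keeping the Bayesian belief update as the model of how agents reason about each other under uncertainty.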

 

Qualifications:

  • Indispensable: a PhD in AI/ML or Decision Sciences (including Decision Analysis and Game Theory)

  • Experience in Bayesian methods

  • Experience in core topics in Game Theory, Negotiations, and Adversarial Risk Analysis

  • Experience in Python, GitHub, HPC, and probabilistic programming languages

 

Expected Outcomes:

  • Introduction of new frameworks and applications that incorporate multiagent Bayesian reasoning into reinforcement learning and LLMs.
  • Contributions to the body of knowledge in Bayesian methods and collective behavior.

 
