MIT-Learning and Intelligent Systems Lab

Research interests include behavior learning in very large environments, learning for robot motion planning, visual scene understanding, and transfer learning.

  • Multi-Agent Learning: From Game Theory to Ad-hoc Networks
    In large multi-agent settings, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. Mobilized ad-hoc networking can be viewed as such a setting. Using tools from reinforcement learning, game theory, and signal processing, we devise general methods for learning effective behavior policies in multi-agent environments. We then specialize these techniques into practical methods for learning good routing and movement policies in mobilized ad-hoc networks.

    Our early work focused on understanding the interactions between agents in simple competitive games [4]. An agent’s belief about the complexity and nature of the other players in the game was shown to play an important role in the design of multi-agent learning algorithms.

    In our work on cooperative multi-agent settings, we have focused on situations where each agent may have only limited knowledge of the global state of the world. Agents may be aware only of their local surroundings, perhaps due to limited sensor capabilities or limited communication between them. Mobile ad-hoc networking is a field where many of these assumptions hold. It is also an area of growing importance as sensors and wireless communications become cheap and ubiquitous. Our work applies reinforcement learning techniques to the networking problem, and we show that we can learn effective routing and movement policies for the mobile nodes [3].

    [2] Y. Chang, T. Ho, and L. Kaelbling. All learning is local: Multi-agent learning in global reward games. In Advances in Neural Information Processing Systems (NIPS) 16, 2004.

    [3] Y. Chang, T. Ho, and L. Kaelbling. Mobilized ad-hoc networks: A reinforcement learning approach. In International Conference on Autonomic Computing (ICAC), 2004.

    [4] Y. Chang and L. Kaelbling. Playing is believing: The role of beliefs in multi-agent learning. In Advances in Neural Information Processing Systems (NIPS) 14, 2002.

  • Learning Subtask Goals to Improve Motion Planning
    … Another important issue is how to compress our memory so that we do not run out of space. This could be done by discarding infrequently used paths, or by finding a way to represent several similar paths with one canonical path. Both of these notions risk removing potentially useful information from our memory, and so must be considered carefully …
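The two compression ideas above, evicting infrequently used paths and collapsing similar paths into one canonical representative, can be sketched in a few lines of Python. This is a minimal illustration with a made-up path representation and distance measure, not the lab's actual data structure:

```python
from collections import OrderedDict

def path_distance(p, q):
    """Crude dissimilarity: mean pointwise gap between equal-length 2-D paths."""
    if len(p) != len(q):
        return float("inf")
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(p, q)) / len(p)

class PathMemory:
    """Bounded path library: merges near-duplicates, evicts least recently used."""

    def __init__(self, capacity=3, merge_tol=0.5):
        self.capacity = capacity
        self.merge_tol = merge_tol
        self.paths = OrderedDict()   # path id -> (path, use count)
        self._next_id = 0

    def store(self, path):
        # Idea 2: if a stored path is close enough, reuse it as the canonical one.
        for pid, (p, uses) in self.paths.items():
            if path_distance(p, path) <= self.merge_tol:
                self.paths[pid] = (p, uses + 1)
                self.paths.move_to_end(pid)   # mark as recently used
                return pid
        # Idea 1: evict the least recently used path when out of space.
        if len(self.paths) >= self.capacity:
            self.paths.popitem(last=False)
        pid = self._next_id
        self._next_id += 1
        self.paths[pid] = (path, 1)
        return pid

mem = PathMemory(capacity=2)
a = mem.store([(0, 0), (1, 1), (2, 2)])
b = mem.store([(0, 0), (1.1, 1.0), (2, 2)])   # similar: merged into path a
c = mem.store([(5, 5), (6, 6), (7, 7)])       # distinct: stored as a new path
print(a == b, len(mem.paths))                  # → True 2
```

Both risks mentioned in the text are visible here: eviction permanently forgets a path, and merging silently discards the small differences between a path and its canonical representative.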
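The learned-routing idea in the multi-agent networking project above can be illustrated with a minimal Q-routing-style sketch. The toy fixed topology, hop-count cost, and all names below are illustrative assumptions, not the setup from the papers:

```python
import random

# Toy static topology; each node learns Q[node][dest][neighbor]:
# the estimated hop count to deliver a packet to `dest` via `neighbor`.
NEIGHBORS = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}
ALPHA, EPSILON = 0.5, 0.1

Q = {n: {d: {nb: 0.0 for nb in NEIGHBORS[n]}
         for d in NEIGHBORS if d != n}
     for n in NEIGHBORS}

def route(node, dest, explore=True):
    """Epsilon-greedy choice of next hop for a packet bound for dest."""
    if explore and random.random() < EPSILON:
        return random.choice(NEIGHBORS[node])
    return min(Q[node][dest], key=Q[node][dest].get)

def deliver(src, dest, max_hops=20, explore=True):
    """Forward one packet, updating Q from the chosen neighbor's estimate."""
    node, hops = src, 0
    while node != dest and hops < max_hops:
        nxt = route(node, dest, explore)
        # Neighbor's best estimate of remaining hops (0 if it is the dest).
        remaining = 0.0 if nxt == dest else min(Q[nxt][dest].values())
        # Q-routing style update: one hop plus the downstream estimate.
        Q[node][dest][nxt] += ALPHA * (1 + remaining - Q[node][dest][nxt])
        node, hops = nxt, hops + 1
    return hops

random.seed(0)
for _ in range(500):          # train on random source/destination pairs
    s, d = random.sample(list(NEIGHBORS), 2)
    deliver(s, d)

# Greedy delivery after training: expect the 2-hop route A -> B/C -> D.
print(deliver("A", "D", explore=False))
```

The actual work tackles the much harder mobilized case, where node movement changes the topology and movement policies are learned jointly with routing; this sketch shows only the core idea of nodes learning next-hop values from local feedback.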

About Mohammad Khazab
