Task Allocation Using Continuous Resource Distributed Markov Decision Processes
Matthew Brown
USC

Exploration of other planets will require teams of robotic rovers to first precede and later assist human explorers. A critical challenge is to generate plans for these rovers that maximize their productivity while minimizing the time and energy spent completing a set of tasks. The tasks assigned to an agent may feature temporal constraints as well as complex interdependencies with other tasks, including those assigned to other agents. Numerous techniques have been developed to efficiently compute policies for a single agent using continuous resource Markov Decision Processes (CR-MDPs). However, extending such algorithms to settings with multiple agents is difficult, given that solving decentralized Markov Decision Processes (DEC-MDPs) has been shown to be NEXP-complete. Even when using approximate algorithms, it is difficult to achieve the scale-up necessary to model the size and complexity of real-world domains using continuous resource DEC-MDPs (CR-DEC-MDPs). I intend to explore methods for solving CR-DEC-MDPs more efficiently, including (1) fast, locally optimal methods that exploit domain structure; and (2) efficient methods of convolution using the fast Fourier transform (FFT).
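To illustrate the second direction, the following is a minimal sketch of FFT-based convolution. In continuous resource MDPs, the resource usage of two sequential actions is distributed as the convolution of their individual usage distributions; computing this directly over an n-point discretization costs O(n^2) per pair, while the FFT reduces it to O(n log n). The function name, discretization, and example densities below are illustrative assumptions, not part of the proposed algorithm.

```python
import numpy as np

def fft_convolve(f, g):
    """Linear convolution of two sampled densities via zero-padded FFT.

    Assumes f and g are nonnegative arrays sampled on the same uniform
    grid over the resource axis (e.g., energy consumed per action).
    """
    n = len(f) + len(g) - 1                 # length of the linear convolution
    size = 1 << (n - 1).bit_length()        # round up to a power of two for speed
    F = np.fft.rfft(f, size)                # transform both inputs
    G = np.fft.rfft(g, size)
    return np.fft.irfft(F * G, size)[:n]    # pointwise multiply, invert, trim padding

# Hypothetical resource-usage distributions for two sequential actions
f = np.array([0.2, 0.5, 0.3])   # e.g., energy used by action A
g = np.array([0.1, 0.6, 0.3])   # e.g., energy used by action B

combined = fft_convolve(f, g)
# Agrees with direct convolution up to floating-point error
assert np.allclose(combined, np.convolve(f, g))
```

Because a CR-DEC-MDP solver performs many such convolutions when propagating resource distributions through a plan, the asymptotic savings compound across the policy computation.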