What is the expected number of mutants in a stochastically growing colony once it reaches a given size, N? This is a variant of the famous Luria–Delbrück model, which studies the distribution of mutants after a given time-lapse. Instead of fixing the time-lapse, we assume that the colony size is a measurable quantity, which is the case in many in vivo oncological and other applications. We study the mean number of mutants for an arbitrary cell death rate, and give partial results for the variance. For a restricted set of parameters we provide analytical results; we also design a very efficient computational method to calculate the mean, which works for most of the parameter values, and any colony size, no matter how large. We find that a cellular population with a higher death rate will contain a larger number of mutants than a population of equal size with a smaller death rate. Also, a very large population will contain a larger percentage of mutants; that is, irreversible mutations act like a force of selection, even though here the mutants are assumed to have no selective advantage. Finally, we investigate the applicability of the traditional, 'fixed-time' approach and find that it approximates the 'fixed-size' problem whenever stochastic effects are negligible.
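The fixed-size setting described above can be illustrated with a minimal stochastic simulation: grow a colony under a birth-death process with irreversible, selectively neutral mutation, stop the first time the population reaches N, and record the mutant count. This is an illustrative sketch, not the paper's analytical or computational method; all parameter names and default values are assumptions.

```python
import random

def mutants_at_size(N, birth=1.0, death=0.1, mu=1e-3, rng=random):
    """Simulate a birth-death process with irreversible, neutral mutation
    until the colony first reaches size N. Returns the number of mutants
    at that moment, or None if the colony goes extinct first.
    Mutants have the same birth/death rates as wild-type cells
    (no selective advantage, as in the text)."""
    wild, mut = 1, 0
    while 0 < wild + mut < N:
        total = wild + mut
        pick_mut = rng.random() < mut / total        # pick a cell uniformly
        if rng.random() < birth / (birth + death):   # birth event
            if pick_mut:
                mut += 1
            elif rng.random() < mu:                  # wild-type division yields
                mut += 1                             # one mutant daughter
            else:
                wild += 1
        else:                                        # death event
            if pick_mut:
                mut -= 1
            else:
                wild -= 1
    return mut if wild + mut >= N else None
```

Averaging the returned counts over many runs, conditioned on reaching size N, estimates the mean studied in the abstract; raising `death` while holding N fixed should raise that mean, matching the stated finding.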
We propose a pattern-based approach combined with the concept of Enriched Common Fate Graph for the problem of classifying Go positions. A kernel function for weighted graphs to compute the similarity between two board positions is proposed and used to learn a support vector machine and address the problem of position evaluation. Numerical simulations are carried out using a set of human played games and show the relevance of our approach.
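The core idea of a graph kernel for board positions can be sketched generically: encode each position as a weighted graph and define similarity as an inner product over shared features. The sketch below uses a simple shared-edge kernel as a stand-in; the paper's actual Enriched Common Fate Graph construction and kernel are not reproduced here, and the graph encodings shown are hypothetical.

```python
def edge_match_kernel(g1, g2):
    """Toy positive-semidefinite kernel for edge-weighted graphs:
    the inner product over shared edges of their weights. Each graph
    is a dict mapping an edge key (e.g. a labeled node pair) to a
    weight, i.e. a sparse feature vector indexed by edges."""
    return sum(w * g2[e] for e, w in g1.items() if e in g2)

# Hypothetical encodings of two board positions as weighted graphs.
pos_a = {("B-group", "eye"): 1.0, ("B-group", "W-group"): 0.5}
pos_b = {("B-group", "eye"): 2.0, ("W-group", "liberty"): 1.0}
similarity = edge_match_kernel(pos_a, pos_b)  # 1.0 * 2.0 = 2.0
```

A kernel of this form can be supplied to an SVM as a precomputed Gram matrix, which is the general pattern the abstract describes for position evaluation.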
Go is an ancient board game that poses unique opportunities and challenges for AI and machine learning. Here we develop a machine learning approach to Go, and related board games, focusing primarily on the problem of learning a good evaluation function in a scalable way. Scalability is essential at multiple levels, from the library of local tactical patterns, to the integration of patterns across the board, to the size of the board itself. The system we propose is capable of automatically learning the propensity of local patterns from a library of games. Propensity and other local tactical information are fed into a recursive neural network, derived from a Bayesian network architecture. The network integrates local information across the board and produces local outputs that represent local territory ownership probabilities. The aggregation of these probabilities provides an effective strategic evaluation function that is an estimate of the expected area at the end (or at other stages) of the game. Local area targets for training can be derived from datasets of human games. A system trained using only 9 × 9 amateur game data performs surprisingly well on a test set derived from 19 × 19 professional game data. Possible directions for further improvements are briefly discussed.
Go is an ancient board game that poses unique opportunities and challenges for artificial intelligence. Currently, there are no computer Go programs that can play at the level of a good human player. However, the emergence of large repositories of games is opening the door for new machine learning approaches to address this challenge. Here we develop a machine learning approach to Go, and related board games, focusing primarily on the problem of learning a good evaluation function in a scalable way. Scalability is essential at multiple levels, from the library of local tactical patterns, to the integration of patterns across the board, to the size of the board itself. The system we propose is capable of automatically learning the propensity of local patterns from a library of games. Propensity and other local tactical information are fed into recursive neural networks, derived from a probabilistic Bayesian network architecture. The recursive neural networks in turn integrate local information across the board in all four cardinal directions and produce local outputs that represent local territory ownership probabilities. The aggregation of these probabilities provides an effective strategic evaluation function that is an estimate of the expected area at the end, or at various other stages, of the game. Local area targets for training can be derived from datasets of games played by human players. In this approach, while requiring a learning time proportional to N^4, skills learned on a board of size N^2 can easily be transferred to boards of other sizes. A system trained using only 9 × 9 amateur game data performs surprisingly well on a test set derived from 19 × 19 professional game data. Possible directions for further improvements are briefly discussed.
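The final aggregation step described above, turning per-point territory ownership probabilities into a scalar evaluation, can be sketched directly. The network architecture itself is not reproduced; the ownership map below is a made-up 3 × 3 example, and the function names are illustrative.

```python
def expected_area(ownership):
    """Aggregate per-point ownership probabilities into a scalar
    evaluation: the expected number of board points owned by Black
    at the end of the game. ownership[i][j] is P(point (i, j) ends
    up as Black territory), as produced by the local outputs."""
    return sum(p for row in ownership for p in row)

# A hypothetical 3x3 ownership map: a secure Black corner at the
# top left, a contested center, and a White corner at bottom right.
probs = [[1.0, 0.9, 0.5],
         [0.9, 0.5, 0.2],
         [0.5, 0.2, 0.0]]
score = expected_area(probs)  # approximately 4.7 of 9 points
```

Because the evaluation is a sum of local probabilities, it is defined for any board size, which is one way the local-to-global design supports the size transfer mentioned in the abstract.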
Papers by Lin Wu