
Incentivizing an Unknown Crowd

2021, arXiv

Abstract

Motivated by the strategic behavior commonly observed in crowdsourced labeling, we study the problem of sequentially eliciting information without verification (EIWV) from a heterogeneous and unknown crowd of workers. We propose a reinforcement learning-based approach that is effective across a wide range of settings, including potential irrationality and collusion among workers. With the aid of a costly oracle and an inference method, our approach dynamically decides when to call the oracle and remains robust even in the presence of frequent collusion. Extensive experiments show the advantage of our approach. Our results also present the first comprehensive experiments of EIWV on large-scale real datasets and the first thorough study of the effects of environmental variables.
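The abstract describes the mechanism only at a high level. Purely for intuition, the toy sketch below shows one way the oracle-calling decision could be framed as a reinforcement learning problem: an agent observes how much the crowd agrees on a task and chooses between trusting the (possibly colluded) majority vote or paying for a costly but correct oracle label. This is not the paper's algorithm; the worker model, the tabular Q-learning agent, `ORACLE_COST`, and all other names and parameters are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout, not the authors' method): a tabular
# Q-learning agent decides, per task, whether to pay for an oracle label or
# accept the majority vote of simulated workers who occasionally collude.
import random
from collections import defaultdict

N_WORKERS = 5       # simulated crowd size (assumption)
ORACLE_COST = 0.3   # price of one oracle call (assumption)
P_RELIABLE = 0.8    # chance an honest worker reports the true label
P_COLLUDE = 0.2     # chance a task is hit by a colluding clique

def simulate_reports(truth):
    """Return one binary report per worker, with occasional collusion."""
    if random.random() < P_COLLUDE:
        return [1 - truth] * N_WORKERS  # colluders all agree on a wrong label
    return [truth if random.random() < P_RELIABLE else 1 - truth
            for _ in range(N_WORKERS)]

def state_of(reports):
    """Discretize the agreement level: number of workers voting for label 1."""
    return sum(reports)

Q = defaultdict(float)  # Q[(state, action)]; action 0 = trust majority, 1 = call oracle
ALPHA, EPSILON = 0.1, 0.1

def choose_action(state):
    if random.random() < EPSILON:
        return random.randint(0, 1)
    return max((0, 1), key=lambda a: Q[(state, a)])

def run(episodes=20000):
    for _ in range(episodes):
        truth = random.randint(0, 1)
        reports = simulate_reports(truth)
        state = state_of(reports)
        action = choose_action(state)
        if action == 1:  # pay for the oracle: always correct, but costly
            reward = 1.0 - ORACLE_COST
        else:            # trust the (possibly colluded) majority vote
            majority = int(sum(reports) > N_WORKERS / 2)
            reward = 1.0 if majority == truth else 0.0
        # One-step (bandit-style) Q-learning update; tasks are treated as independent.
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

if __name__ == "__main__":
    random.seed(0)
    run()
    for s in range(N_WORKERS + 1):
        best = max((0, 1), key=lambda a: Q[(s, a)])
        print(f"{s} votes for label 1 -> {'call oracle' if best else 'trust majority'}")
```

In this toy setup the learned policy simply trades the oracle cost against the risk that apparent agreement is due to collusion; the paper's actual approach additionally couples such decisions with an inference method over worker behavior.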
