
AVeCQ: Anonymous Verifiable Crowdsourcing With Worker Qualities


Abstract:

In crowdsourcing systems, requesters publish tasks, and interested workers provide answers to get rewards. Worker anonymity motivates participation since it protects their privacy. Anonymity with unlinkability is an enhanced version of anonymity because it makes it impossible to "link" workers across the tasks they participate in. Another core feature of crowdsourcing systems is worker quality, which expresses a worker's trustworthiness and quantifies their historical performance. In this work, we present AVeCQ, the first crowdsourcing system that reconciles these properties, achieving enhanced anonymity and verifiable worker quality updates. AVeCQ relies on a suite of cryptographic tools, such as zero-knowledge proofs, to (i) guarantee workers' privacy, (ii) prove the correctness of worker quality scores and task answers, and (iii) ensure commensurate payments. AVeCQ is developed modularly, where requesters and workers communicate over a platform that supports pseudonymity, information logging, and payments. To compare AVeCQ with the state-of-the-art, we prototype it over Ethereum. AVeCQ outperforms the state-of-the-art in three popular crowdsourcing tasks (image annotation, average review, and Gallup polls). E.g., for an Average Review task with 5 choices and 128 workers, AVeCQ is 40% faster (including computing and verifying necessary proofs, and blockchain transaction processing overheads), with the task's requester consuming 87% less gas.
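As a rough illustration of the commitment-style primitives such systems build on, the sketch below shows a hash-based commitment to a worker's quality score: the worker can publish the commitment without revealing the score, and later open it verifiably. This is a minimal, hypothetical example, not AVeCQ's actual construction, which additionally uses zero-knowledge proofs so that scores never need to be opened at all.

```python
import hashlib
import secrets

def commit(quality: int) -> tuple[bytes, bytes]:
    """Commit to a quality score; returns (commitment, opening nonce).

    The random nonce hides the score, so identical scores yield
    unlinkable commitments across tasks.
    """
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + quality.to_bytes(4, "big")).digest()
    return digest, nonce

def verify(commitment: bytes, quality: int, nonce: bytes) -> bool:
    """Check that a revealed (quality, nonce) pair opens the commitment."""
    return hashlib.sha256(nonce + quality.to_bytes(4, "big")).digest() == commitment

c, r = commit(87)
assert verify(c, 87, r)        # correct opening accepted
assert not verify(c, 90, r)    # a different score is rejected
```

In a system like AVeCQ, a zero-knowledge proof would replace the explicit opening step, letting a worker prove a statement such as "my committed quality exceeds the task's threshold" without disclosing the score itself.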
Published in: IEEE Transactions on Dependable and Secure Computing ( Volume: 22, Issue: 1, Jan.-Feb. 2025)
Page(s): 406 - 423
Date of Publication: 15 May 2024


I. Introduction

Crowdsourcing is the process of gathering information regarding a task (e.g., a query or project) by leveraging agents who are incentivized to work on it within a specific time frame [1]. A prominent example of crowdsourcing revolves around Human Intelligence Tasks (HITs), which can be used to enrich datasets designed for empowering machine learning models. Those who request crowdsourcing tasks can extract statistical data, draw conclusions, and even monetize the results based on the individually provided answers [2].

References
[1] J. Howe et al., "The rise of crowdsourcing," Wired Mag., vol. 14, no. 6, pp. 1-4, 2006.
[2] A. Marcus, D. Karger, S. Madden, R. Miller, and S. Oh, "Counting with the crowd," Proc. VLDB Endowment, vol. 6, no. 2, pp. 109-120, 2012.
[3] Y. Zheng, G. Li, Y. Li, C. Shan, and R. Cheng, "Truth inference in crowdsourcing: Is the problem solved?," Proc. VLDB Endowment, vol. 10, no. 5, pp. 541-552, 2017.
[4] H. Sun, B. Dong, H. Wang, T. Yu, and Z. Qin, "Truth inference on sparse crowdsourcing data with local differential privacy," Proc. IEEE Int. Conf. Big Data, pp. 488-497, 2018.
[5] P. Welinder, S. Branson, P. Perona, and S. Belongie, "The multidimensional wisdom of crowds," Proc. Int. Conf. Neural Inf. Process. Syst., pp. 2424-2432, 2010.
[6] R. He and J. McAuley, "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering," Proc. World Wide Web Conf., pp. 507-517, 2016.
[7] E. Krivosheev, S. Bykau, F. Casati, and S. Prabhakar, "Detecting and preventing confused labels in crowdsourced data," Proc. VLDB Endowment, vol. 13, no. 12, pp. 2522-2535, 2020.
[8] N. B. Shah and D. Zhou, "Double or nothing: Multiplicative incentive mechanisms for crowdsourcing," Proc. Int. Conf. Neural Inf. Process. Syst., pp. 1-9, 2015.
[9] V. Pérez, C. Aybar, and J. M. Pavía, "Dataset of the COVID-19 lockdown survey conducted by GIPEyOP in Spain," Data Brief, vol. 40, 2022.
[10] J. Wang, J. Tang, D. Yang, E. Wang, and G. Xue, "Quality-aware and fine-grained incentive mechanisms for mobile crowdsensing," Proc. IEEE Int. Conf. Distrib. Comput. Syst., pp. 354-363, 2016.
[11] D. Oleson, A. Sorokin, G. Laughlin, V. Hester, J. Le, and L. Biewald, "Programmatic gold: Targeted and scalable quality assurance in crowdsourcing," Proc. 11th AAAI Conf. Hum. Computation, pp. 43-48, 2011.
[12] K. Emery, T. Sallee, and Q. Han, "Worker selection for reliably crowdsourcing location-dependent tasks," in Mobile Computing, Applications, and Services, Berlin, Germany: Springer, 2015.
[13] "Amazon Mechanical Turk," [Online]. Available: mturk.com
[14] "Microwork," [Online]. Available: microwork.app
[15] "Qmarkets: Collective intelligence solutions," [Online]. Available: qmarkets.net
[16] E. Peer, J. Vosgerau, and A. Acquisti, "Reputation as a sufficient condition for data quality on Amazon Mechanical Turk," Behav. Res. Methods, vol. 46, pp. 1023-1031, 2014.
[17] Y. Tang, S. Tasnim, N. Pissinou, S. S. Iyengar, and A. Shahid, "Reputation-aware data fusion and malicious participant detection in mobile crowdsensing," Proc. IEEE Int. Conf. Big Data, pp. 4820-4828, 2018.
[18] M. H. Moti, D. Chatzopoulos, P. Hui, and S. Gujar, "FaRM: Fair reward mechanism for information aggregation in spontaneous localized settings," Proc. Int. Joint Conf. Artif. Intell., pp. 506-512, 2019.
[19] T. Kandappu, A. Friedman, V. Sivaraman, and R. Boreli, "Privacy in crowdsourced platforms," in Privacy in a Digital, Networked World, Berlin, Germany: Springer, pp. 57-84, 2015.
[20] Y. Wang, Z. Cai, G. Yin, Y. Gao, X. Tong, and G. Wu, "An incentive mechanism with privacy protection in mobile crowdsourcing systems," Comput. Netw., vol. 102, pp. 157-171, 2016.
[21] L. Wang, G. Qin, D. Yang, X. Han, and X. Ma, "Geographic differential privacy for mobile crowd coverage maximization," Proc. AAAI Conf. Artif. Intell., 2018.
[22] Ú. Erlingsson, V. Pihur, and A. Korolova, "RAPPOR: Randomized aggregatable privacy-preserving ordinal response," Proc. ACM Conf. Comput. Commun. Secur., pp. 1054-1067, 2014.
[23] L. Wang, D. Zhang, D. Yang, B. Y. Lim, and X. Ma, "Differential location privacy for sparse mobile crowdsensing," Proc. IEEE 16th Int. Conf. Data Mining, pp. 1257-1262, 2016.
[24] M. Huai, D. Wang, C. Miao, J. Xu, and A. Zhang, "Privacy-aware synthesizing for crowdsourced data," Proc. 28th Int. Joint Conf. Artif. Intell., pp. 2542-2548, 2019.
[25] Y. Lu, Q. Tang, and G. Wang, "ZebraLancer: Private and anonymous crowdsourcing system atop open blockchain," Proc. IEEE 38th Int. Conf. Distrib. Comput. Syst., pp. 853-865, 2018.
[26] H. To, G. Ghinita, and C. Shahabi, "A framework for protecting worker location privacy in spatial crowdsourcing," Proc. VLDB Endowment, vol. 7, no. 10, pp. 919-930, Jun. 2014.
[27] N. Salehi, L. C. Irani, M. S. Bernstein, A. Alkhatib, E. Ogbe, and K. Milland, "We are Dynamo: Overcoming stalling and friction in collective action for crowd workers," Proc. 33rd Annu. ACM Conf. Hum. Factors Comput. Syst., pp. 1621-1630, 2015.
[28] M. Li et al., "CrowdBC: A blockchain-based decentralized framework for crowdsourcing," IEEE Trans. Parallel Distrib. Syst., vol. 30, no. 6, pp. 1251-1266, Jun. 2019.
[29] Q. Li and G. Cao, "Providing efficient privacy-aware incentives for mobile sensing," Proc. IEEE 34th Int. Conf. Distrib. Comput. Syst., pp. 208-217, 2014.
[30] X. Yan et al., "Verifiable, reliable, and privacy-preserving data aggregation in fog-assisted mobile crowdsensing," IEEE Internet Things J., vol. 8, no. 18, pp. 14127-14140, Sep. 2021.