MTurk is a platform on which Workers (or "Turkers") complete Human Intelligence Tasks (HITs) for monetary compensation. HITs are put up by Requesters (that's us!). MTurk was originally set up for large-scale tasks that require human intelligence (e.g., labeling photos or finding telephone numbers), but it has also been used by social scientists to conduct (large-scale) online experiments.
In this class, all MTurk projects will be launched using a common class account, which simplifies funding logistics. We will give each team more detailed instructions directly on how to access the class account, launch HITs, and so on.
In order to create MTurk tasks, you will need a requester account. For this course, you should create and use your own requester account to debug your task. When you are ready to deploy to MTurk to collect actual data, we will give you credentials for the course Requester account.
Some Turkers multitask, and most of them do HITs for many hours a day, so a well-designed task should include attention checks, manipulation checks, and similar safeguards.
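For illustration, here is a minimal sketch (in JavaScript, with hypothetical names and wording) of what a simple attention check might look like; the exact item and its placement will depend on your task.

    // A hypothetical attention-check item mixed in among real trials.
    // Workers who are multitasking are likely to miss the instruction.
    const attentionCheck = {
      prompt: "To show you are paying attention, please select 'Strongly disagree'.",
      options: ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"],
      expected: "Strongly disagree"
    };

    // Flag (or exclude) participants who fail the check at analysis time.
    function passedAttentionCheck(response) {
      return response === attentionCheck.expected;
    }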
Note that Amazon workerIDs are tied to Workers' (public) Amazon accounts, and thus constitute identifiable information. You should therefore anonymize your data by redacting workerIDs (and any other identifiable information).
Do all analyses on anonymized data. (This prevents cases where others cannot reproduce your analyses because the analyses depend on identifiable information; if you start from anonymized data, your analyses will never use any of it.)
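As one possible approach (a sketch only, assuming Node.js and a JSON data file with a workerId field; the file and field names are placeholders for whatever your task actually produces), you could redact workerIDs by replacing them with salted one-way hashes before any analysis:

    // Replace identifiable workerIDs with salted one-way hashes.
    const crypto = require("crypto");
    const fs = require("fs");

    const raw = JSON.parse(fs.readFileSync("raw_data.json", "utf8"));

    const anonymized = raw.map(row => {
      const copy = { ...row };
      copy.workerId = crypto
        .createHash("sha256")
        .update("SECRET_SALT" + row.workerId)  // keep the salt out of anything you share
        .digest("hex")
        .slice(0, 12);
      return copy;
    });

    fs.writeFileSync("anonymized_data.json", JSON.stringify(anonymized, null, 2));

Share and analyze only the anonymized file; keep the raw file (and the salt) private.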
If your study can be run as a simple survey, we recommend that you collect data via Qualtrics. You can then deploy it on MTurk using TurkPrime by following the instructions in this TurkPrime guide.
TurkPrime has paid features that allow you to batch your tasks and prevent Turkers from taking multiple assignments from HITs within the same group (e.g., different conditions of the same study). These features are also explained in the TurkPrime guide. Speak to your TA if you think you need these features.
If your study is more complex, you may need to create a web experiment using HTML, CSS, and Javascript. If that's the case, we recommend you find a similar study and then modify it until it does what you want. We recommend you follow this tutorial to find example tasks and tips on web page structure and inspection.
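If it helps to see the general shape, here is a minimal, hypothetical sketch of how such a task might log trial data in JavaScript; the example tasks you find will have their own (usually more elaborate) conventions, so treat the names below as placeholders:

    // Collect one record per trial; serialize everything at the end of the task.
    const trials = [];

    function logTrial(stimulus, response, rtMs) {
      trials.push({
        stimulus: stimulus,
        response: response,
        rt_ms: rtMs,
        timestamp: Date.now()
      });
    }

    // Whatever deployment route you use, you will typically submit
    // something like this JSON string when the task ends.
    function getData() {
      return JSON.stringify({ condition: "A", trials: trials });
    }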
You can then post your task directly on MTurk, via TurkPrime, or by using nosub (a command-line tool for deploying a study).
For the purpose of this course, we recommend hosting your task in your Stanford AFS space.
It is our ethical obligation to inform participants about a task to help them decide if they'd like to participate. For the sake of this course, please use the content below at the start and end of your task, respectively.
Include this on the experiment frontpage:
By answering the following questions, you are participating in a class project being performed by students in the Stanford Computer Science Department. If you have questions about this project, please contact us at [email protected]. Your participation in this project is voluntary and your anonymity is assured; the researchers who have requested your participation will not receive any personal information about you. We have recently been made aware that your public Amazon.com profile can be accessed via your worker ID if you do not choose to opt out. If you would like to opt out of this feature, you may follow the instructions available here.
Include this at the end of the experiment: a short debriefing thanking the participant, explaining in 2-4 lines what your study was about, and asking them not to share this information with other potential participants.
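One simple way to wire both of these into a web task (a hypothetical sketch; the element IDs and the startExperiment function are placeholders, not part of any required template) is to gate the task behind the consent text and show the debriefing on a final screen:

    // Assumes the frontpage text above sits in an element with id "consent-page"
    // and the debriefing text sits in an element with id "debrief-page".
    document.getElementById("agree-button").addEventListener("click", () => {
      document.getElementById("consent-page").style.display = "none";
      startExperiment();  // your own function that launches the first trial
    });

    function showDebriefing() {
      // Call this once the last trial is finished, before the participant submits.
      document.getElementById("debrief-page").style.display = "block";
    }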
In this class, we pay all participants a fair wage of $15/hour. That means that if a study takes an hour, we pay $15; if it takes half an hour, we pay $7.50; and so on. Each team has an MTurk budget of $100 (which must cover the pilot budget as well as MTurk/TurkPrime fees). If you run your task in a clever way (in batches of 9 or fewer to avoid extra fees), that leaves you with about 5 hours and 20 minutes of person-hours to dedicate to piloting and completing this task. See the sections above to learn how TurkPrime can help decrease the additional fee costs of running more than 9 participants.
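As a rough sanity check on your own numbers (a sketch only; the 25% combined fee rate below is an assumption, so confirm the actual MTurk/TurkPrime fees for your configuration), the arithmetic looks like this:

    // Back-of-the-envelope budget calculation.
    const budget = 100;        // team MTurk budget in dollars
    const hourlyWage = 15;     // fair wage paid to participants
    const feeRate = 0.25;      // assumed combined MTurk/TurkPrime fees on top of wages

    const personHours = budget / (hourlyWage * (1 + feeRate));
    console.log(personHours.toFixed(2) + " person-hours");  // ~5.33, i.e. about 5h20m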
If you are concerned that your time estimate might be off, consider using the Stanford Fair Work tool. This is a single line of JavaScript you can add to your task that will ensure your workers are paid at least minimum wage.
The first step in piloting should be a pilot with non-naive participants. The goals here are to 1) collect "data" from you/your friends to ensure you are logging data correctly, 2) get feedback on the task by running it several times, and 3) code your planned analyses to ensure you can run them on your data.
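For the data-logging check in particular, even a tiny script helps. Here is a hypothetical sketch (file and field names are placeholders; match them to your own logging format) that verifies each logged trial has the fields your planned analyses need:

    // Quick sanity check on pilot data before moving on to the MTurk sandbox.
    const fs = require("fs");
    const sessions = JSON.parse(fs.readFileSync("anonymized_data.json", "utf8"));

    const requiredFields = ["stimulus", "response", "rt_ms"];
    let problems = 0;
    for (const session of sessions) {
      for (const trial of session.trials) {
        for (const field of requiredFields) {
          if (!(field in trial)) {
            console.error("Missing '" + field + "' in a trial from session", session.workerId);
            problems++;
          }
        }
      }
    }
    console.log("Checked " + sessions.length + " sessions; " + problems + " problems found.");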
If you are using cosub and JavaScript, the MTurk sandbox will help with this step. The MTurk sandbox is a simulated environment that lets you test and debug your tasks. Does your task appear as you'd expect? Are you collecting the data you'd expect? You can post your task in the MTurk sandbox from your requester account. You can then log into the worker sandbox to test that task using your requester credentials. If you are using cosub to deploy your task (directions on this page), it will default to the sandbox unless you use a special flag. If you are using TurkPrime, follow the instructions in the guide for switching to a sandbox account.
For this stage of piloting, you will need to create your own AWS account and link it to your requester account so you have a place you can store data. Instructions for setting up and linking this account can be found here.
Once you have convinced yourself that you are ready to pilot your task on MTurk (with actual Turkers as participants), email your TA with 1) a link to your paradigm, 2) data from your pilot 1 sandbox testing, 3) an analysis script that runs on that data, and 4) the calculated cost for both MTurk and TurkPrime. If your TA approves your task, they will send you instructions for the pilot, the MTurk and TurkPrime credentials, and the class Gmail login so you can monitor data collection.
When you are running both pilot 2 and your actual study, you must actively monitor this Gmail account using the provided credentials. If you get complaints about the study, please address them courteously and quickly (ideally within a few hours). Turkers can be very helpful if you are responsive. Always assure them that they will be paid for their work.
You need approval from Michael or your TA (send an email to both including the final task link) before you can collect final data. Collecting this data should follow the same procedure as pilot 2, just with more participants.