Ethica provides many features, each highly flexible, so you need to make sure the configuration you have set is exactly what you intended. It's therefore very important that, after you finish designing your study in Ethica and before the field deployment, you test the study internally to ensure all components behave as expected.
As a rule of thumb, you should test the following aspects of the study prior to the field deployment:
- Is the participation period for each participant correctly configured?
- Have you included all data sources you need for the study, or are some missing or extraneous? Also, what does the data collected from each data source look like?
- Is the flow of each survey correct? Are skip patterns configured as you intended?
- Have you configured the triggering logic for each survey correctly? Are time-triggered and proximity-triggered surveys prompted at the right time?
We suggest conducting the test in two phases. First, after completing the first draft of the study design, test the study by yourself. This allows you to tweak different components and immediately test them again on your phone. Once you are confident the study works as you expect, conduct the second phase of testing by inviting your friends, colleagues, and lab mates to join your study as dummy participants. This allows you to mock a complete field deployment and pinpoint any minor adjustments missed in your first round of testing.
At any time during your testing, you can fully wipe your study. This operation will delete the following:
- All registered participants. Note that participants' accounts will not be deleted; only their registration in your study will be removed.
- All data collected from all participants.
- All operation and audit logs collected for the study.
This allows you to start from a clean state and run the test again. It's important to note that this operation is irrevocable: you cannot restore the deleted data. Because of this, Ethica's user interface does not offer any option to wipe the study. Instead, you can contact one of our staff at any time to reset the study.
As a researcher, you probably have a Researcher account with Ethica which you use to create and manage your study. Ethica does not allow Researcher account holders to join a study as a participant. Therefore, you cannot use your researcher account to test your study.
What you can do is create another account as a Participant, using a different email address, and use that account for anything that requires a participant account. Ideally, use your professional email address for your Researcher account and your personal email address for your Participant account. Alternatively, you may use email aliases, such as a + suffix.
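If your email provider supports plus addressing (Gmail and several others do; check yours before relying on it), a "+" alias lets one inbox back two distinct Ethica accounts. A minimal sketch with made-up addresses:

```python
def plus_alias(email: str, tag: str) -> str:
    """Build a '+' alias for an email address. Whether mail to the
    alias lands in the same inbox depends on your email provider."""
    local, domain = email.rsplit("@", 1)
    return f"{local}+{tag}@{domain}"

# One inbox, two distinct account email addresses (hypothetical):
researcher = "jane.doe@example.com"           # Researcher account
participant = plus_alias(researcher, "test")  # Participant account
print(participant)  # jane.doe+test@example.com
```

To Ethica the two addresses are unrelated accounts, while all mail still reaches you.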
Note that it's not possible in Ethica to convert an account from Researcher to Participant, or vice versa. If you need to convert your account, you need to contact Ethica staff to delete your account, and then use that email address for your new account.
In the previous section, we discussed how you can set Study Period and Participation Duration to instruct Ethica how long each participant should be part of the study. If you want to make sure the values you have chosen for these two parameters are correct, use your Participant account to register in your study, and then check the participation start and end times that Ethica has assigned to you.
To do so, in your Researcher Dashboard select your study and navigate to the Participation -> Adherence page:
Here you will see only one participant registered in the study, which is your Participant account. The Joined In and Last Day fields show the start and end dates, respectively, of this individual's participation period. In the above example, the participant starts the study on Jun. 7th at 13:56 and remains in the study until Jun. 14th at 13:56, i.e. 7 days.
All survey sessions and sensor-based data collection from this participant will happen within this period, and no data will be collected before or after this time window.
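The arithmetic behind these two timestamps is simply the join time plus the Participation Duration. The sketch below (an illustration, not Ethica's code, using the dates from the example above) makes the window explicit:

```python
from datetime import datetime, timedelta

def participation_window(joined_in: datetime, duration_days: int):
    """Return the (start, end) of a participant's period: all data
    collection happens inside this window and nowhere else."""
    return joined_in, joined_in + timedelta(days=duration_days)

# Example from above: joined Jun. 7th at 13:56, 7-day participation
start, end = participation_window(datetime(2021, 6, 7, 13, 56), 7)
print(end)  # 2021-06-14 13:56:00
```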
While data collection from the data sources you have chosen happens fully automatically, it's important to test a few things about them before the field deployment.
First, you should make yourself familiar with the setup and permission requirements for each of the data sources. Some data sources work without any initial actions from the participants, but others require the participant to give explicit permission to Ethica to access the data. For example, if your study requires GPS, the participant will need to explicitly allow Ethica to access their location via GPS before Ethica can collect this data. Some other data sources may require the participant to install another Ethica application on their phone in order for the data to be collected.
The good news is that participants have to perform these actions only once, right after they join your study (well, maybe a few times if things get reset on their device). Ethica will show them a set of notifications in the app guiding them through what they should do:
If you join your study as a mock participant, you will be able to check out all these notifications and the steps to be taken for each of them, so you can better guide your participants during the enrollment session, in case they have any questions.
Second, you need to check the data collected from each data source and what it looks like. To do so, remain in the study for a few days so that enough data is collected. Then make sure all the data from your Participant account is uploaded successfully to Ethica servers (check here how), and go to your Researcher Dashboard to review the data collected from your Participant account (more on that here).
Ethica Survey Editor offers a Survey Preview which simulates how your survey will work on the participants' devices. You can always use the Survey Preview to quickly test your survey flow:
While Survey Preview gives you an overall feeling for your survey flow, it's important to test the flow on a smartphone as well. If possible, we suggest testing it on both an Android phone and an iPhone, preferably with different screen sizes. This way you can make sure the content you have put on each page is presented properly on different devices.
The challenge is that if your survey is set to be prompted at a certain time, based on proximity, or via other contextual triggering logic, then when you join the study as a participant you have to wait a while before the Ethica app prompts your survey for you to test it. This gets even worse when you have multiple surveys, each with different criteria that affect the triggering logic. Testing this can easily become very complex.
As our focus at this stage is on survey flow, we can set the triggering logic aside and just check the flow. To do so, simply add a User Triggered triggering logic to each of your surveys. This instructs Ethica to add a button to the app's home screen for each of your surveys. Using a survey's trigger button, you can initiate that survey as many times as needed. The following image shows how this study has configured its Time Triggered surveys to also be User Triggered, to simplify testing the survey flow:
This way, you can launch each survey as many times as needed, even surveys that are supposed to be prompted only under certain conditions, such as the Eligibility survey or Time Triggered surveys.
Note that in this case, every time you make a change to the survey via your Researcher Dashboard, you should update the study on your Participant account as well (i.e., on the device you are using for testing). This is fairly straightforward, as at this stage your study has only one participant: yourself. Here you can read more on how to update participants' devices after modifying the study.
When you have tested the survey flow and are happy with how the questions are presented and how the skip patterns and branching work, you can remove all the User Triggered triggering logics you added for testing.
Another important part of testing your surveys is to make sure each survey is triggered exactly when you intend. Ethica offers several types of triggering logic you can use throughout your study. Here we discuss how to test each type.
Eligibility triggering logic is very simple to test. A survey with Eligibility triggering logic is prompted before a participant joins your study. So all you have to do is try to join your study using your Participant account; you should see your eligibility survey before the registration page is shown.
Drop out triggering logic is also straightforward. A survey with Drop out triggering logic is presented when the participant decides to drop out of your study. As a participant, open the Ethica app on your smartphone and try to drop out of your study. At this point, you should be presented with your study's Drop out survey.
User Triggered triggering logic is also simple to test. Ethica should present a button in the app's user interface; tapping it should open the intended survey.
When you assign a Time-Triggered triggering logic to a survey, you instruct Ethica to notify participants on a certain schedule and ask them to complete the survey. In this case, it's very important to make sure the defined schedule is exactly what you had in mind. Such a schedule can be as simple as every day at 9 am, or as complex as, if the participant is female, on Mondays, Wednesdays, and Fridays between 6 and 7 pm.
When a new participant joins your study, for each survey with Time-Triggered triggering logic Ethica generates a time table of survey prompts and uses it to prompt the survey. You can check that time table right after the participant registers in your study, to ensure the generated survey prompt times are what you expect.
To do so, from your Researcher Dashboard select your study and navigate to the Participation -> Survey Sessions page. There, you can select one or more participants and one or more surveys, and check all sessions for those surveys. If a session was expected to be prompted in the past, Ethica also displays whether the participant responded to it, canceled it, or let it expire. If a session is in the future, you can check when the survey is expected to be prompted.
The following image shows our Participant account registered in our study. We defined one survey for the study, which is prompted 3 days a week, on Mondays, Wednesdays, and Fridays, between 9 am and 11:30 am. The time table below shows that the first survey session will be prompted on Mar. 4th at 9:07 am, the second on Mar. 6th, followed by sessions on Mar. 8th, 11th, and so on.
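Ethica's actual scheduler is internal, but the arithmetic it performs can be sketched as: for each eligible weekday, pick a prompt time at a random offset inside the allowed window. A reproducible sketch mirroring the Mon/Wed/Fri, 9:00-11:30 am configuration (all parameter names are illustrative):

```python
import random
from datetime import datetime, timedelta

def prompt_schedule(first_day, n_days, weekdays,
                    window_start_min, window_len_min, seed=0):
    """Sketch of a survey prompt time table: one prompt per eligible
    weekday (0=Mon), at a random minute inside the allowed window."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    sessions = []
    for d in range(n_days):
        day = first_day + timedelta(days=d)  # first_day is at midnight
        if day.weekday() in weekdays:
            minute = window_start_min + rng.randrange(window_len_min)
            sessions.append(day + timedelta(minutes=minute))
    return sessions

# Mon/Wed/Fri between 9:00 and 11:30 (a 150-minute window), two weeks
sessions = prompt_schedule(datetime(2021, 3, 1), 14, {0, 2, 4}, 9 * 60, 150)
```

With this configuration, each week yields three sessions, every one of them somewhere between 9:00 and 11:30 am.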
Every time you modify your survey, Ethica increments the survey's version number and marks all previous versions as invalid. Any participant who joined your study before the modification will receive the update upon their next interaction with the server. During the update, Ethica deletes all of the participant's future sessions for the old version and generates new sessions based on the new version of the survey.
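That update amounts to a simple merge: past sessions are kept for the record, while future ones are replaced by sessions generated from the new survey version. A sketch of the described behavior (not Ethica's code; function and variable names are made up):

```python
from datetime import datetime

def apply_survey_update(old_sessions, new_sessions, now):
    """Keep the old version's past sessions; replace its future
    sessions with those generated for the new survey version."""
    past = [s for s in old_sessions if s <= now]
    future = [s for s in new_sessions if s > now]
    return past + future

old = [datetime(2021, 3, 1, 9), datetime(2021, 3, 8, 9)]    # old version
new = [datetime(2021, 3, 5, 10), datetime(2021, 3, 9, 10)]  # new version
merged = apply_survey_update(old, new, now=datetime(2021, 3, 4))
```

Here the Mar. 1st session already happened and survives; the Mar. 8th one is dropped in favor of the new version's schedule.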
You can define surveys in Ethica that are prompted based on a participant's proximity to another participant, or to an object or place in the physical world. To use such a survey, your study must be configured for proximity monitoring via Bluetooth beacons, as discussed here. When you configure your study for beacon-based proximity monitoring, you specify the teams, roles, and subjects in your study. Later on, you will use this information to configure your survey's Proximity Trigger triggering logic.
Assuming you have set up that part successfully, testing your survey prompt involves configuring your beacons and placing them where appropriate, then using the phone logged in with your Participant account to enter and leave their proximity. If the phone detects a long enough proximity session with a beacon, you will receive a survey on your phone.
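What counts as "long enough" is essentially a dwell-time check over consecutive beacon sightings. The sketch below illustrates the idea (the thresholds are made up; Ethica's real detector is internal):

```python
def long_enough_proximity(sightings, min_dwell_s, max_gap_s):
    """Given sorted timestamps (in seconds) at which a beacon was seen,
    return True if some contiguous run of sightings (no gap larger
    than max_gap_s) lasts at least min_dwell_s."""
    if not sightings:
        return False
    run_start = prev = sightings[0]
    for t in sightings[1:]:
        if t - prev > max_gap_s:  # run broken: start a new one
            run_start = t
        prev = t
        if prev - run_start >= min_dwell_s:
            return True
    return prev - run_start >= min_dwell_s

# Beacon seen every 10 s for two minutes -> a 110 s contiguous dwell
print(long_enough_proximity(list(range(0, 120, 10)), 60, 30))  # True
```

Briefly walking past a beacon produces only a short run of sightings and no prompt, while lingering near it produces a run long enough to trigger the survey.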
We suggest checking out this article on how to investigate exactly why a given Proximity-Trigger survey was or was not prompted on any given occasion.