Compared to mEMA
First Impression: Learning Curve
Learning the basics of mEMA is straightforward, but the system often shifts the burden of complexity onto participants and researchers. Careful attention to the documentation is crucial to avoid irreversible mistakes. For instance:
- Because there is no survey versioning feature, mEMA's documentation repeatedly warns, across many pages, against modifying a survey after it has been used to collect data, since doing so breaks all previously collected responses.
- The analytics code, essentially a variable name for each question, must be unique; otherwise, one question's responses overwrite another's in the responses data table. Although this code is generated from the label you assign to each question, the system does not check for uniqueness, which can cause confusion and errors during data analysis.
- Question labels and analytics codes must be 40 characters or fewer; exceeding this limit prevents participants from uploading responses.
- GPS permission must be granted when the app is installed and cannot be granted later; participants who skip it must reinstall the app to enable GPS.
Despite the importance of these constraints, the system neither enforces them nor guides the user through the interface to prevent such mistakes, which can lead to data collection errors. A small pre-deployment check, like the sketch below, can help catch some of these issues before they reach participants.
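The following is a minimal sketch of such a check, assuming you keep the question labels and analytics codes you plan to configure in mEMA in a simple list before setting them up. The 40-character limit and the uniqueness rule come from the constraints above; everything else (the example data, names, and structure) is purely illustrative.

```python
# Minimal pre-deployment check for the constraints described above.
# Assumes you keep (label, analytics_code) pairs in a plain list before
# configuring them in mEMA; the example data is purely illustrative.
from collections import Counter

MAX_LEN = 40  # mEMA's documented limit for labels and analytics codes

questions = [
    ("Morning mood", "MOOD_AM"),
    ("Evening mood", "MOOD_PM"),
    ("Evening mood (weekend)", "MOOD_PM"),  # duplicate code: would overwrite MOOD_PM
]

def check_questions(questions):
    """Report duplicate analytics codes and over-length labels or codes."""
    problems = []

    # Duplicate analytics codes silently overwrite each other in the data table.
    code_counts = Counter(code for _, code in questions)
    for code, count in code_counts.items():
        if count > 1:
            problems.append(f"analytics code '{code}' is used {count} times")

    # Labels and codes over 40 characters prevent participants from uploading responses.
    for label, code in questions:
        if len(label) > MAX_LEN:
            problems.append(f"label '{label}' exceeds {MAX_LEN} characters")
        if len(code) > MAX_LEN:
            problems.append(f"analytics code '{code}' exceeds {MAX_LEN} characters")

    return problems

for problem in check_questions(questions):
    print("WARNING:", problem)
```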
Lack of Organization
mEMA manages participants by dividing them into groups rather than organizing everything under a study. While this flexibility can be advantageous, it can also complicate things. For instance, when a new participant is added to a group, the group's settings, such as its surveys, are not automatically applied to them; you must manually assign surveys and activities to each new participant.
Strangely, you cannot create a new group yourself; you have to submit a request and wait for the support team to create it for you. For such core functionality, this is inconvenient and can become a bottleneck in your study setup.
Since the system is not designed around studies, you need to manually assign surveys to specific participants or groups. This can be tedious and error-prone, especially with a large number of participants and surveys (a similar shortcoming to m-Path).
Registration and Login
To avoid storing personally identifiable information (PII), mEMA does not use usernames and passwords. Instead, participants log in using a unique code. While this enhances privacy, it prevents the creation of public studies, as each participant must be predefined and provided with their unique code.
Outdated UI/UX
Founded in 2011, mEMA's UI/UX has seen little change over the years. Although the system is feature-rich, navigating and learning these features can be challenging due to their complex and unintuitive design. For example, enabling larger image sizes in a survey requires appending "_LARGEIMG" to the question's analytics code, and appending "_READ" allows participants to read text while recording audio responses. Such features are easy to miss without a thorough review of the documentation.
Empty Promises
mEMA's documentation does not always back up its claims and advertised features. For instance, the pricing page lists support for various data sources, but many of them, such as humidity, step count, and accelerometer, are deprecated.
Several features marked as “Unique to mEMA” are, in fact, standard in many EMA systems. Examples include using images as questions/responses and Garmin integration.
mEMA Strengths
mEMA offers strong support for acting on collected data, whether survey responses or sensor data. Two standout features:
- The ability to deliver JITAIs (Just-in-Time Adaptive Interventions) based on physiological data collected from participants' Garmin devices. For example, the intervention can prompt a survey whenever there is a significant change in the participant's physiological data (a rough sketch of this triggering idea follows this list).
- High-risk responses can be flagged and sent to the researcher in real time, which is useful in studies that require immediate intervention.
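To illustrate the JITAI idea mentioned above, here is a rough, hypothetical sketch of the kind of triggering rule involved: prompt a survey when a physiological signal, such as heart rate, deviates strongly from a participant's recent baseline. This is not mEMA's actual implementation; the function, threshold, and data are all assumptions made for illustration.

```python
# Illustrative sketch of sensor-triggered prompting (not mEMA's implementation).
# Assumes heart-rate samples arrive as plain numbers; the threshold is hypothetical.
from statistics import mean, stdev

def should_prompt_survey(recent_hr, new_sample, z_threshold=3.0):
    """Return True if the new heart-rate sample deviates strongly from the
    participant's recent baseline, which a JITAI rule might use to trigger
    a survey prompt."""
    if len(recent_hr) < 10:
        return False  # not enough history to estimate a baseline
    baseline = mean(recent_hr)
    spread = stdev(recent_hr)
    if spread == 0:
        return False
    z_score = abs(new_sample - baseline) / spread
    return z_score >= z_threshold

# Example: a resting baseline around 62 bpm, then a sudden spike to 115 bpm.
history = [60, 62, 61, 63, 59, 64, 62, 61, 60, 63]
print(should_prompt_survey(history, 115))  # True -> prompt the survey
```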
Feature Category Comparison
The following table compares Avicenna and mEMA on different categories of features:
Category | Details | Superior |
---|---|---|
Study Setup & Deployment | mEMA is not designed as study-based software, so it is no surprise that it lacks the basic features in this area. It also lacks audit logs, consent forms, screening, televisits, and similar features. | Avicenna |
Notifications | mEMA supports in-app notifications and some email notifications for high-risk responses. However, you cannot customize these notifications or create your own templates, nor can you view sent notifications and their delivery status. | Avicenna |
Participant Activities | While mEMA offers a scripting feature, its time-based triggering logic is less flexible. Its sensor-based triggering logic gives mEMA the edge, but session management is more feature-rich and easier to use in Avicenna, so this category is a tie. | Tie |
Survey | Avicenna wins this one. mEMA covers the basics, such as common question types and general survey capabilities, but it lacks, for example, researcher-responded surveys and public surveys. | Avicenna |
Interventions & Cognitive Tasks | Avicenna offers more features in this category, though the difference between the two platforms is not considerable. | Avicenna |
Gamification & Rewarding | Both Avicenna and mEMA fall short in this category, as neither offers any related features. | Tie |
Software Security & Reliability | mEMA falls short here, as it lacks a convenient way to manage roles and permissions and offers no advanced security settings for user profiles. It is also not GDPR compliant, which can be a significant barrier for researchers in Europe. | Avicenna |
Sensor Data & Wearables | mEMA only supports Garmin devices and has deprecated many of its phone-based data sources. Avicenna is the winner here, as it supports Garmin, Fitbit, Polar, and many phone-based data sources. | Avicenna |
Participant App | mEMA has native Android and iOS apps that work offline, but participants cannot join multiple studies at the same time or customize the app interface, and there is no web app for participants. | Avicenna |
Data Access & Analysis | With mEMA, you need to export the data and move it to other software to view and analyze it further, as there is no proper way to filter or query the data within the platform (see the short export-filtering sketch after this table). | Avicenna |
Other | Both mEMA and Avicenna lack a license management system, but on-premise deployment and API-based access make Avicenna the winner in this category. | Avicenna |
For details, please check the comparison spreadsheet.
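As a rough illustration of the extra step implied in the Data Access & Analysis row, here is a minimal sketch of loading an exported responses file into pandas for filtering. The file name and column names are assumptions; the actual export layout depends on your study configuration.

```python
# Minimal sketch of post-export filtering, assuming a CSV export with
# hypothetical column names; adjust to match your actual export layout.
import pandas as pd

responses = pd.read_csv("mema_export.csv", parse_dates=["response_time"])

# Example query: one participant's responses collected on or after a given date.
subset = responses[
    (responses["participant_id"] == "P001")
    & (responses["response_time"] >= "2024-05-01")
]
print(subset.head())
```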