Compared to m-Path

First Impression: For Research or Practice?

Our main impression of m-Path is that it's not clear whether the product is designed for research or for care delivery and practice. This confusion persisted throughout our use of m-Path. The team has done some work to differentiate the two, such as defining separate dashboards for each case, but it did not help. As a result, many aspects of m-Path do not fit the workflow of a research study. For example:

  1. There is no Participant. You enroll a Client.
  2. There is no Study for participants to enroll in. Rather, Clients add a Practitioner in their app.
  3. There is no concept of a research team working on a study. In fact, outside of pricing, the concepts of research teams and access levels are unclear.

That's why, in order to use m-Path in a study, you must first learn its functionality and then map it to the research flow you are familiar with. For example, to enroll participants, you invite them as Clients and then ask them to add you as a Practitioner.

The problem becomes more pronounced because the software interface and the documentation do not follow consistent terminology. For example, terms such as Beep, Notification, Trigger, or Reminder may each refer to different concepts in different parts of the system. These issues make the product's learning curve very steep.

More Product Planning and Design Could Help

On the other hand, the m-Path team has been actively adding features to the product. Especially for surveys, cognitive tasks, and participant data feedback, the product offers a wide range of options and capabilities.

But for many of these features, the implementation feels slapdash, lacking adequate thought and planning. A few examples we noted:

  1. Surveys have an attribute labeled Required. The intuitive assumption is that this property makes completing the survey mandatory for participants. But it actually means "Internet connectivity is required to upload the responses."

  2. Some technical terms surface in the interface and are passed on to the researcher, which is hard for non-developer users to understand. Examples include whether an item should be cached locally, or whether a Command item should run synchronously or asynchronously.

  3. Question types in a survey start with what we would all expect of a question (text, multiple choice, date and time, slider, etc.) and grow to cover everything the software offers, such as "Notify Person", "Set Home Button", or "Add Applet". At some point the documentation refers to them as "Items", but the design remains inelegant.

  4. The box for typing a question's content is a small one-line field, on a screen surrounded by many other items not needed at the time of creating a survey. We assumed this was to encourage researchers to type short, half-line questions. But the examples show you can type full HTML content there!

  5. At times, participants may be asked to "Install a Button". Even after working with the software for a few days, we could not fully understand what that means, let alone participants, who often spend far less time and attention.

So the rapid-development approach has allowed the m-Path team to quickly load the software with lots of functionality, but shortly after working with the product, you wish more product planning had been done beforehand.

Limited Server Resources

Certain parts of the software follow very unusual design choices. For example:

  1. m-Path stores media files for audio and video questions on a third-party server.
  2. Sensor data is sent to a third-party server directly from the app.
  3. To include an image or audio file in the app, you must upload it elsewhere (e.g., a file-sharing server) and put its URL in the app.

The most likely explanation is that the team has limited server resources to handle all the data, and has therefore offloaded it to other servers. This both saves development time and reduces the load on their servers.

The problem with this approach is that it introduces unnecessary security risks, which in turn make IRB approvals and institutional security reviews more complex. Furthermore, data validation becomes much harder, if not impossible, because the data is fragmented across separate servers.

m-Path Strengths

The main differentiator of m-Path is its ability to feed data back to participants. You can present participants with their past input data, either in raw format or plotted in basic charts (pie chart or bar chart). You can further aggregate responses to a given question over a cohort within your study, or over the entire study population, and present that as an intervention to participants. The ability to do this directly from the dashboard without any coding is valuable for some researchers. Achieving this in Avicenna requires embedding an HTML page in the app and customizing its content. While this is more flexible, it requires custom development, which is an obstacle for common use cases.

m-Path also offers some nice features, such as the ability to share surveys and protocols with other researchers, which can foster collaboration and survey reuse in research. This is similar to the work by Dr. Kirtley and colleagues on the ESM Item Repository.

Feature Category Comparison

The following table compares Avicenna and m-Path on different categories of features:

Category / Superior
Study Setup & Deployment
While m-Path offers some nice ideas, such as protocol and survey sharing, it lacks many features, such as proper localization, auditing, multi-site support, or even study management.
Notifications
m-Path offers very limited support for notifications: no support for different mediums, delivery confirmation and monitoring, event-triggered notifications, and so on.
Participant Activities: Tie
m-Path's triggering logic and activity session management cover most basic cases, although not intuitively. While it falls short in easy creation and management of activity sessions, its powerful scripting (pseudo-R) earns it a good score.
Survey
m-Path offers good control over the layout of every survey page and supports complex branching thanks to its Computation item. But it leaves many features to be desired, such as a proper survey editor and more survey question types (not an "Add a Button" question). That's before getting to more advanced features like researcher-completed surveys or support for editing responses.
Interventions & Cognitive Tasks
m-Path offers a good set of cognitive tasks in its app. They can easily be integrated into surveys, and there are many options to control their layout. Furthermore, you can show different visualizations of data from an individual, a group of participants, or the full study population.
Gamification & Rewarding
m-Path offers gamification and rewards within its app. Combined with the Computation item, this becomes a useful feature for defining different gamification and rewarding flows for your study.
Software Security & Reliability
m-Path does not support defining roles, access levels, and permissions. It's even unclear how to share a study between members of a research team (partly due to their focus on care delivery).
They also lack consistent terminology and software release updates. Nor do they support many of the security features defined in HIPAA and FDA 21 CFR Part 11, even though these are simply good security practices.
Sensor Data & Wearables
Support for smartphone sensors is very crude and ad hoc: participants must install a separate app, the app requests permissions for all sensors at once, and data is uploaded to a third-party server. No wearable support is available. Also, because the data is moved to a separate server, m-Path does not offer any activity triggering based on sensor data or any visualization of that data.
Participant app
The participant app has some limitations, such as no web-based interface and no offline support, as well as some poor design decisions, such as referring to the product's T&C and EULA as Consent, causing confusion with the study's consent.
Aside from these, we still found the app's user experience confusing, owing to the confusing nature of the system. For example, when using the app as a participant, we could not figure out: What is an Applet? What happens when an applet replaces my home screen? Why am I being asked to set a button on my home screen?
Data Access & Analysis
While m-Path supports a wide range of options for showing data to participants, there is not much to do with the data in the researcher's interface other than exporting it. There is no option to view data on the dashboard. Note that sensor data and media files are stored on third-party servers, so you cannot access them through your m-Path interface anyway.
Other
m-Path offers a good interface for managing licenses, usage, and payment via the researcher dashboard. However, to the best of our knowledge, they do not offer branded applications or custom on-premise deployments.

For details, please check the comparison spreadsheet.