# Identify Critical User Journeys

## Context
During each release, we follow a manual testing checklist to make sure critical user journeys work as expected. However, this checklist is not comprehensive: there are additional user journeys, including those found in our tutorials and how-to guides.
I'm opening this issue to collaboratively define the most important user workflows in Nebari, focusing on what users do most often or where things are most fragile.
## Value and/or benefit
Identifying these journeys will guide future testing and help us prioritize development efforts.
## Anything else?
Feel free to:
- Comment with workflows you rely on or have seen users depend on
- Link to related GitHub issues or discussions
- Help write or refine the journeys outlined below
## User journeys
| User journey | Description | Docs | Persona |
|---|---|---|---|
| Create a new user/group | Log in as an admin to Keycloak and create a new user/group with specific permissions | https://www.nebari.dev/docs/how-tos/configuring-keycloak | Admin |
| Create a conda environment | Log in to conda-store and create an environment in a shared namespace and another environment in the personal namespace | https://www.nebari.dev/docs/tutorials/creating-new-environments | End User / Admin |
| Use Dask | Launch a dask-gateway cluster, test the auto-scaler, and validate that the dashboard is accessible via the dask-labextension | https://www.nebari.dev/docs/tutorials/using_dask | End User |
| Submit a notebook job | Use Jupyter Scheduler to submit a Notebook as a job | https://www.nebari.dev/docs/tutorials/jupyter-scheduler | End User |
| Use VS Code | Open the VS Code extension, install the Python extension and run a Python script | https://www.nebari.dev/docs/how-tos/using-vscode | End User |
| Use a GPU | Launch a JupyterLab server with a GPU enabled, run the `nvidia-smi` command, and use PyTorch to validate that GPUs are available | https://www.nebari.dev/docs/how-tos/use-gpus | End User |
| Create a Python app with jhub-apps | Create, deploy and share a Python app using jhub-apps | https://www.nebari.dev/docs/tutorials/create-dashboard | End User |
| Use Argo Workflows | Submit a workflow using Argo | https://www.nebari.dev/docs/tutorials/argo-workflows-walkthrough | End User |
| Create a Grafana dashboard | Log in to Grafana and create a dashboard | | End User / Admin |
| Use Loki | Log in to Grafana and run a Loki query to access system logs | https://www.nebari.dev/docs/how-tos/access-logs-loki | End User / Admin |
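
As a rough illustration of how one of these journeys could eventually be automated, here is a minimal sketch for the `nvidia-smi` step of the "Use a GPU" journey. This is only an assumption about how such a check might look, not an existing test in our suite; the PyTorch half of the journey (e.g. `torch.cuda.is_available()`) would run inside the JupyterLab server itself and is omitted here.

```python
import shutil
import subprocess


def gpu_available() -> bool:
    """Return True if nvidia-smi is on PATH and lists at least one GPU.

    Returns False (rather than raising) when the tool is missing or
    fails, so the check can run on GPU-less CI nodes without erroring.
    """
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        result = subprocess.run(
            ["nvidia-smi", "-L"],  # -L prints one line per detected GPU
            capture_output=True,
            text=True,
            timeout=10,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0 and "GPU" in result.stdout
```

A journey test could then assert `gpu_available()` on servers launched with a GPU profile, and skip the PyTorch validation elsewhere.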
Thanks for opening this issue @marcelovilla, I agree this will be really valuable to document!
I think it would be worthwhile adding a column for User Personas, especially now that we have updated, simpler personas (Google Doc, WIP PR to update personas on website).
This mapping of user journeys to personas would also help us better understand our user base and their needs, and these artifacts complement each other beautifully!
I'd love to work with you on this. We can start by assigning personas to the existing workflows and identifying any missing ones based on user personas.
Hey @smeragoel, thanks for your suggestion! I like the idea and I've added a persona column to the table.
From the three personas outlined in the document, I think these user journeys cover the Admin and the End User personas. Workflows/journeys related to the SysAdmin persona should be covered in our deployment tests and can be worked on separately.
@marcelovilla - we are struggling to figure out how to transfer the ability to interact with the cluster (redeploy, etc.) to another user.
Since we currently don't have anyone to pick this one up, I will unpin and deprioritize it, but it remains on our backlog.