suttacentral
SuttaCentral website application
SuttaCentral server and client repository
Deploying
```
$ git clone [email protected]:suttacentral/suttacentral.git
$ cd suttacentral
$ git checkout production
$ make prepare-host
$ make run-production-env
```
Supply the needed env variables; if you chose random, you will be prompted back with the generated values. Remember them! You will use some of them to access admin services.
Or updating in individual steps
```
cd /opt/suttacentral
git pull
make generate-env-variables   # supply needed env variables (only if env has been changed)
make run-prod-no-logs         # run docker containers
make delete-database          # OPTIONAL: skip this if data hasn't seriously changed (do it if texts have been deleted or renamed)
make migrate                  # only needed after delete-database, but harmless to run anyway
make load-data                # load data into ArangoDB
make index-elasticsearch      # load data into Elasticsearch
make reload-uwsgi             # make sure the Flask server is not serving stale cached data
```
Minimally disruptive update
If no containers need to be rebuilt then this is all that needs to be run:
```
cd /opt/suttacentral
git pull
run frontend-builder
make load-data
make reload-uwsgi
make index-elasticsearch
```
Changing the branch(es) the server, or staging server, uses
```
cd /opt/suttacentral
git checkout <code-branch>
cd server/sc-data
git checkout <data-branch>
cd po_text
git checkout <po-branch>
```
Then run the commands for updating, probably including the `make delete-database` step.
Development
1. Server
1.1 Running the project
- Install docker and docker-compose.
- Clone the repo: `git clone [email protected]:suttacentral/suttacentral.git`
- Cd into the repo: `cd suttacentral`
- Run `make prepare-host` in order to make some small adjustments on the host machine so that we can run ElasticSearch.
- 1st time run: run `make run-preview-env` to build images, load data, index Elasticsearch and more.
- Normal run: run `make run-dev`.
1.2 Loading the data
- Ensure the server is up and run `make load-data`.
- To index Elasticsearch, run `make index-elasticsearch`.
1.3 Docs
API documentation is available at /api/docs.
Swagger documentation is generated from docstrings in the API methods. The docstring should use the OpenAPI specification 2.0 YAML format. This YAML docstring will be interpreted as OpenAPI's Operation Object.
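As an illustration, here is a hypothetical view method (not an actual SuttaCentral endpoint) whose docstring is plain OpenAPI 2.0 YAML describing the Operation Object:

```python
# Hypothetical example: a Flask-RESTful-style resource whose GET method
# documents itself with an OpenAPI 2.0 YAML docstring. The endpoint name,
# parameter and responses are illustrative, not taken from the repo.

class Suttas:
    def get(self, uid):
        """
        summary: Get a single sutta by uid
        parameters:
          - name: uid
            in: path
            type: string
            required: true
        responses:
          200:
            description: Sutta data
          404:
            description: Sutta not found
        """
        return {'uid': uid}
```

A Swagger generator reads `Suttas.get.__doc__`, parses it as YAML, and emits the corresponding Operation Object for `/api/docs`.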
Development
In this mode the server, nginx, and client dirs are mounted in the Docker containers, so any local changes take effect in the containers as well.
In addition, uWSGI+Flask exposes port 5001 on localhost, ArangoDB exposes port 8529, and Elasticsearch exposes ports 9200 and 9300.
1.4 Makefile
There is a Makefile with the following commands:
- `prepare-host` - Set `vm.max_map_count` to `262144` (otherwise ElasticSearch won't work) and set up client git hooks.
- `run-dev` - Run containers in development mode.
- `run-dev-no-logs` - Run containers in development mode without output to the console.
- `run-prod` - Run containers in production mode.
- `run-prod-no-logs` - Run containers in production mode without output to the console.
- `migrate` - Run migrations in the Flask container.
- `clean-all` - Remove all containers, volumes and built images.
- `reload-nginx` - Reload Nginx.
- `reload-uwsgi` - Reload uWSGI+Flask.
- `prepare-tests` - Start containers in test mode and wait for start-ups to finish.
- `test` - Run tests inside containers.
- `test-client` - Run only frontend tests.
- `test-server` - Run only server tests.
- `load-data` - Pull the most recent data from GitHub and load it from the `server/sc-data` folder into the db.
- `delete-database` - Delete the database from ArangoDB.
- `index-elasticsearch` - Index ElasticSearch with data from the db.
- `run-preview-env` - Fully rebuild and run the most recent development version.
- `run-preview-env-no-search` - Fully rebuild and run the most recent development version, but do not index ElasticSearch.
- `run-production-env` - Fully rebuild and run the most recent production version. You will be prompted with questions regarding env variables.
- `generate-env-variables` - Run the `env_variables_setup.py` script and generate env variables for the production version.
1.5 Working with ArangoDB
Our project uses ArangoDB on the back-end. In development mode it exposes port 8529 on localhost; you can access its web interface at http://127.0.0.1:8529.
In code running inside the Docker containers, you can access the database at the address sc-arangodb on the same port.
In the development mode:
Login: root
password: test
In order to change the password, you have to change ARANGO_ROOT_PASSWORD in the relevant env's .env file, e.g. if you want to change it in the development env, edit the .dev.env file.
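The two addresses above can be captured in a small helper; this is an illustrative sketch (the helper name is hypothetical, and python-arango is assumed as the client library):

```python
# Build the ArangoDB URL for either environment. Inside the Docker network the
# hostname is sc-arangodb; from the host machine it is 127.0.0.1. Both use 8529.
def arango_url(in_container: bool = True, port: int = 8529) -> str:
    host = 'sc-arangodb' if in_container else '127.0.0.1'
    return f'http://{host}:{port}'

# With python-arango (assumed client library; database name illustrative),
# a connection would then look like:
#
#   from arango import ArangoClient
#   client = ArangoClient(hosts=arango_url())
#   db = client.db('_system', username='root', password='test')  # dev credentials
```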
1.6 Nginx proxy
Our project uses nginx as an HTTP reverse proxy. It is responsible for serving static files and passing /api/* endpoints to the uWSGI+Flask server.
1.7 Working with elasticsearch
Elasticsearch exposes ports 9200 and 9300.
1.8 Flask + uWSGI
Flask is hidden behind uWSGI. uWSGI communicates with nginx over a unix socket. The socket file (uwsgi.sock) lives in the socket-volume shared between nginx and flask+uwsgi.
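A minimal sketch of what such a setup looks like in nginx configuration (this is illustrative, not the repo's actual config; the socket path is an assumption):

```nginx
server {
    listen 80;

    # Static client files served directly by nginx (path is an assumption)
    location / {
        root /usr/share/nginx/html;
    }

    # API requests are handed to uWSGI+Flask over the shared unix socket
    location /api/ {
        include uwsgi_params;
        uwsgi_pass unix:/sockets/uwsgi.sock;
    }
}
```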
1.9 Creating db migrations
In order to create a database migration in our app, follow these simple steps:
- In the `server/server/migrations/migrations` folder, create a file named `<migration_name>_<id of the last migration + 1>.py`.
- Add this line at the top of the file: `from ._base import Migration`.
- Create a class that inherits from the `Migration` class.
- Set the `migration_id` class attribute to match the file name.
- Create some tasks. Each task should be a separate method accepting only `self` as a parameter.
- Set `tasks = ['first_task', 'second_task', ...]` in the class attributes.
- You are good to go; just remember to never change the `migration_id`, otherwise your migrations might fail.
For example:
```python
from common.arangodb import get_db
from migrations.base import Migration


class InitialMigration(Migration):
    migration_id = 'initial_migration_001'
    tasks = ['create_collections']

    def create_collections(self):
        """
        Creates a collection of suttas and a collection of edges between them.
        """
        db = get_db()
        graph = db.create_graph('suttas_graph')
        suttas = graph.create_vertex_collection('suttas')
        parallels = graph.create_edge_definition(
            name='parallels',
            from_collections=['suttas'],
            to_collections=['suttas']
        )
```
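To see how `migration_id` and `tasks` fit together, here is a hypothetical sketch of a minimal `Migration` base class (the repo's actual `_base.Migration` is not shown here and will differ, e.g. by recording applied ids in the database):

```python
# Hypothetical sketch of a Migration base class: run() executes each method
# named in `tasks`, in order. A real implementation would also persist
# `migration_id` so the same migration never runs twice.

class Migration:
    migration_id = None
    tasks = []

    def run(self):
        for task_name in self.tasks:
            getattr(self, task_name)()


class AddExampleCollection(Migration):
    # Illustrative migration; the id must match the file name and never change.
    migration_id = 'add_example_collection_002'
    tasks = ['create_example_collection']

    def create_example_collection(self):
        self.done = True  # stand-in for real db work


migration = AddExampleCollection()
migration.run()
```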
Flask manage tasks
- `python manage.py migrate` - Run migrations.
- `python manage.py list_routes` - List all available routes/URLs.
1.10 Style guidelines
- Follow PEP8 for Python code.
- Try to keep line width under 120 characters.
- Use formatted string literals for string formatting.
- Use Type Hints whenever possible.
- In view methods (get, post, etc.), use the YAML OpenAPI 2.0 object format in docstrings.
- For the rest of the docstrings, use Google style.
- Code for the API endpoints should be placed in the `api` folder, except for the `search` endpoint.
- Names, variables, docstrings, comments, etc. should be written in English.
- Test files should be placed in a `tests` dir in the directory of the file they test.
2. Client
2.1 Style guidelines
- Based on the Airbnb JavaScript Style Guide for JS code...
General considerations:
- Use template strings.
- Use ES6 classes (`class MyElement extends Polymer.Element`) instead of the old `Polymer({...})` syntax when declaring an element inside your `<script>` tags.
- Use `const`/`let` instead of `var` when declaring a variable.
- Use `===` and `!==` instead of `==` and `!=` when comparing values, to avoid type coercion.
- Comments explaining a function's purpose should be written on the line directly above the function declaration.
- Internal HTML imports should come after external ones (from bower_components) and be separated by a newline.
- When commenting components at the top level (above `<dom-module>`), keep the HTML comment tags (`<!--` and `-->`) on their own separate lines.
- Try to keep line width under 120 characters.