- Docker
- docker-compose
- A couchdb data source with data, as generated by dainst/idai-field-client
The Web application stack is made up of four components, all of which are Docker containers:
- elasticsearch
- cantaloupe
- api
- ui
Given that all configurations have been set properly, the whole stack can be started via `docker-compose up`. However, it is advisable to configure and start the services one by one (for example, start the elasticsearch container with `docker-compose up elasticsearch`) in order to make sure things work properly.
Acts as the primary datastore. `api`, and through it `ui`, depend on this service running. It can be started via
docker-compose up elasticsearch
To do a hard reset of the data during development, do
$ rm -rf data/elasticsearch
$ mkdir data/elasticsearch
Prepare the configuration:
$ cp api/config/dev_prod.exs.template api/config/dev.exs
$ vi api/config/dev.exs # Edit
Set up a connection to a couchdb instance
couchdb_url: "<url>",
couchdb_user: "<user>",
couchdb_password: "<pass>",
and configure at least one project
projects: ["<project>"]
If you do not have a project yet, the simplest way to acquire one is to start Field Desktop, set `couchdb_url` to `ip_of_host_as_seen_from_within_the_container:3000`, and set `projects: ["test"]`. Do a web search to find out how to obtain the correct IP for the system you run Docker on.
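Putting the settings above together, the relevant part of `api/config/dev.exs` might look as follows. This is a sketch: the `config :api` key is inferred from the `:api.projects` reference later in this document, and the user and password values are placeholders. On Docker Desktop, `host.docker.internal` resolves to the host from within a container; on other setups, substitute the host IP as described above.

```elixir
config :api,
  couchdb_url: "http://host.docker.internal:3000",
  couchdb_user: "couchdb-user",          # placeholder
  couchdb_password: "couchdb-password",  # placeholder
  projects: ["test"]
```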
The goal here is to ingest and index one or more projects from a couchdb into our elasticsearch. We can trigger the ingest process and query for the documents via REST API calls against our api. To get there as quickly as possible, we make sure the following curl commands will work without any authentication. For that, we give the anonymous user admin rights:
users: [%{ name: "anonymous", admin: true }]
Assuming `elasticsearch` is already running, we start the `api` and trigger the process of reading in and converting the contents from a couchdb (filled with suitable Field data):
1$ docker-compose up api
2$ curl -XPOST localhost:4000/api/worker/reindex
Observe the logging in terminal 1 (the first command) to see when the process has finished. After that, call
$ curl localhost:4000/api/documents
to see all the documents.
Visit the `ui` directory and install the dependencies:
$ cd ui
$ npm i
Prepare the configuration:
$ cp src/configuration.json.template src/configuration.json
$ vi src/configuration.json # Edit
Set `fieldUrl` to `localhost:4000/api`. Start the iDAI.field UI with `npm start` and visit http://localhost:3001.
If you used the `test` project of Field Desktop, enter `testf1` into the search field and click the search button; you should get one hit. Click on it to see more.
To try out the alternative iDAI.shapes UI, set `shapesUrl`, start the application with `npm run start-shapes`, and visit http://localhost:3002.
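A minimal sketch of `src/configuration.json` with both settings from above, under the assumption that these are top-level keys in the template; the shapes backend URL is a placeholder you need to fill in:

```json
{
  "fieldUrl": "localhost:4000/api",
  "shapesUrl": "<url-of-the-shapes-backend>"
}
```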
For conversion of images and tiles run
$ curl -XPOST localhost:4000/api/worker/conversion # if one has images from the client
$ curl -XPOST localhost:4000/api/worker/tiling # if there are georeferenced images
If you have images, place them under `data/cantaloupe` (or override the docker-compose configuration as described further below to change the default location).
The `Api.Auth` section of the active config file `dev.exs` has two parts, `users` and `readable_projects`.
The `users` array has three possible keys: `name`, `pass` and `admin`. The `name` and `pass` values are used to authenticate users, either directly via the API or via the user interfaces of iDAI.field or iDAI.shapes. The `admin` property is an optional boolean which allows a given user to access all projects as well as to control all administrative functions via the API.
One user, `anonymous`, always exists, even when not declared in the `users` array. If one wants to speed up development, one can grant this user admin rights by declaring `%{ name: "anonymous", admin: true }` (the `pass` property is not necessary here).
The `readable_projects` section then determines which of the configured projects can be seen by which users. `readable_projects` is a map with user names as keys and, as values, arrays of the readable projects (chosen from `:api.projects`) for the corresponding user. Here one can also specify which projects can be seen by anonymous users: simply use the `:anonymous` or `"anonymous"` key and list the publicly accessible projects.
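Putting both parts together, the `Api.Auth` section might look like this. This is a sketch; the user names, passwords and project names are placeholders, not taken from an actual configuration:

```elixir
users: [
  %{ name: "user-1", pass: "pass-1" },
  %{ name: "anonymous", admin: true }
],
readable_projects: %{
  "user-1" => ["project-a", "project-b"],
  "anonymous" => ["project-a"]  # publicly accessible
}
```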
Users can sign in with the configured credentials via the user interfaces and see their readable projects. In addition to that they see all publicly accessible projects, which are, of course, also accessible without any login.
In addition to handling requests from the user interfaces, direct calls to the API give access to some extra administrative functionality.
As already said, development can happen with an anonymous user endowed with admin rights. The curl statements listed here usually refer to protected endpoints, which are only accessible with admin permissions. To obtain a token via the API, one can do the following:
$ curl -d '{ "name": "user-1", "pass": "pass-1" }' -H 'Content-Type: application/json' localhost:4000/api/auth/sign_in
The obtained token can then be used in subsequent requests to authenticate and authorize the use of the protected endpoints.
$ curl -H "Authorization: Bearer [TOKEN]" localhost:4000/api/documents
For simplicity, we omit such authentication when listing calls to `curl` here.
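As an illustration of the token flow, the following sketch extracts the token from the sign_in response with POSIX sed. The exact shape of the JSON response (here assumed to be `{"token": "..."}`) should be checked against the actual API.

```shell
# Stand-in for: curl -d '{ "name": "user-1", "pass": "pass-1" }' \
#   -H 'Content-Type: application/json' localhost:4000/api/auth/sign_in
RESPONSE='{"token":"abc123"}'

# Extract the token value (assumes the response contains a "token" field)
TOKEN=$(echo "$RESPONSE" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
echo "$TOKEN"  # abc123

# Use it as a Bearer token on protected endpoints:
# curl -H "Authorization: Bearer $TOKEN" localhost:4000/api/documents
```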
To index a single project
$ curl -XPOST localhost:4000/api/worker/update_mapping # necessary at least once before reindexing any project
$ curl -XPOST localhost:4000/api/worker/reindex/:project
$ docker-compose run --service-ports --entrypoint "iex -S mix" api
$ docker-compose run --service-ports --entrypoint "mix test test/app && mix test --no-start test/unit" api
or
$ docker-compose run --entrypoint "/bin/bash" api
$ mix test test/app # application/subsystem tests
$ mix test --no-start test/unit # unit tests
The frontend runs on port 3001. It automatically picks the next available port if 3001 is already in use.
To build for production use:
$ npm run build
$ docker-compose up --build api
In order to add a dependency, add it to `mix.exs`. Afterwards, the api docker container has to be rebuilt explicitly with:
$ docker-compose build api
After removing a dependency from `mix.exs`, the following command has to be run inside `api/` to make sure `mix.lock` reflects the change:
$ mix deps.clean --unused --unlock
Afterwards, the api docker container has to be rebuilt explicitly with:
$ docker-compose build api
in config.exs
In order to be able to see images, you can override the images volume by creating a `docker-compose.override.yml` that contains the volume definition. This file is automatically picked up by docker-compose and will not be published to the repository.
docker-compose.override.yml
version: "3.7"
services:
  cantaloupe:
    volumes:
      - "/host/environment/path/to/images/project_a_name:/imageroot/project_a_name"
      - "/host/environment/path/to/images/project_b_name:/imageroot/project_b_name"
  api:
    volumes:
      - "/host/environment/path/to/images/project_a_name:/imageroot/project_a_name"
      - "/host/environment/path/to/images/project_b_name:/imageroot/project_b_name"
Prerequisites: Make sure that `ui/src/configuration.json` points to the correct URLs for the production version.
Use `docker-compose push` to publish the docker images to dockerhub.
Afterwards make sure to pull the latest image versions from dockerhub in the respective environment (e.g. portainer).
Make sure to update the mapping and reindex all projects if the new version contains changes to the elasticsearch mapping or preprocessing.