mf-geoadmin3
Use CSW geocat to feed content of infobox
Prerequisites
- [ ] geocat team has a QS process in place to manually publish owners' edits in geocat
- [ ] uptime of the CSW service needs to be above 99.5% and scal- granted
Then
- [ ] title
- [ ] short title
- [ ] Abstract
- [ ] id geo ig
- [ ] link to geocat
- [ ] link to scale number
- [ ] link to the detailed description (Detailbeschreibung)
- [ ] link to the specialist portal (Fachportal)
- [ ] link to wms service
- [ ] Datenstand (date of last data update)
- [ ] optional: url to legend
- [ ] feed IMPORT tool list https://github.com/geoadmin/mf-geoadmin3/blob/master/src/js/ImportController.js
can be updated by geocat directly
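The infobox fields in the checklist above would have to be pulled out of the ISO 19139 (gmd) metadata that the CSW returns. A minimal sketch of such a field-to-XPath mapping, assuming the standard gmd/gco namespaces — the exact element paths depend on geocat's che output schema and are assumptions here:

```python
# Hypothetical mapping from infobox fields to XPath expressions in an
# ISO 19139 (gmd) CSW response. Paths are illustrative assumptions; the
# real geocat (che profile) layout must be verified against a live record.
GMD = "{http://www.isotc211.org/2005/gmd}"
GCO = "{http://www.isotc211.org/2005/gco}"

INFOBOX_FIELDS = {
    "title": f".//{GMD}citation//{GMD}title/{GCO}CharacterString",
    "abstract": f".//{GMD}abstract/{GCO}CharacterString",
    "geocat_id": f"{GMD}fileIdentifier/{GCO}CharacterString",
}
```

The remaining checklist entries (links, Datenstand, legend URL) would extend this dict in the same way once the correct paths are confirmed.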
+100 for that in general. Except for the Datenstand: what about the auto-updates?
@ltclm This is already the case, no? Geocat info is directly imported into the BOD. Now, I don't know if it works with the new version of GeoCat.
@loicgasser That's already the case now, yes, the import scripts are working with the new geocat version. @davidoesch what exactly do you want to change with this ticket? Do you want the layer info to be fetched directly from the csw service?
Yes, "Do you want the layer info to be fetched directly from the csw service?" -> but before that we need the QS to be done on the geocat side (up to now done in the BOD)
We have to import / cache geocat in the BOD; the layer search Sphinx indices have to be updated with this information too. But yes, the QS has to be done in geocat.
This would be only for the infobox then. I am afraid the service will be a little slower. Also, layer information is used to search for layers. What is the benefit of such an action? The BOD is updated with each deploy.
Can we test the performance of getting this info from GeoCat directly? What's the REST service's address?
The information can be extracted from the following service URL (e.g. for ch.are.alpenkonvention):
http://www.geocat.ch/geonetwork/srv/eng/csw?request=GetRecordById&elementSetName=full&outputFormat=application/xml&outputSchema=http://www.geocat.ch/2008/che&service=CSW&version=2.0.2&id=8698bf0b-fceb-4f0f-989b-111e7c4af0a4
The geocat2bod interface uses XPath to do that.
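For illustration, extracting fields from such a GetRecordById response is a few lines of namespace-aware XPath. A sketch using Python's standard library against a stripped-down stand-in for the real response (the actual geocat record uses the che schema and is much larger):

```python
import xml.etree.ElementTree as ET

# ISO 19139 namespaces used in the CSW response.
NS = {
    "gmd": "http://www.isotc211.org/2005/gmd",
    "gco": "http://www.isotc211.org/2005/gco",
}

# Minimal illustrative record; a real geocat response contains many more
# elements (citation, contacts, online resources, ...).
SAMPLE = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                             xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:fileIdentifier>
    <gco:CharacterString>8698bf0b-fceb-4f0f-989b-111e7c4af0a4</gco:CharacterString>
  </gmd:fileIdentifier>
  <gmd:identificationInfo>
    <gmd:MD_DataIdentification>
      <gmd:abstract>
        <gco:CharacterString>Alpenkonvention perimeter</gco:CharacterString>
      </gmd:abstract>
    </gmd:MD_DataIdentification>
  </gmd:identificationInfo>
</gmd:MD_Metadata>"""

root = ET.fromstring(SAMPLE)
uid = root.findtext("gmd:fileIdentifier/gco:CharacterString", namespaces=NS)
abstract = root.findtext(".//gmd:abstract/gco:CharacterString", namespaces=NS)
print(uid, abstract)
```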
My concerns:
- If we use this service as data source for layer info, we will have to find a solution to update the sphinx layer search indices too.
- What happens if geocat is down?
From my point of view it's too much work for a less performant/stable solution. With a deploy every week this year metadata will be updated often enough.
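If the service were still used live, the "what happens if geocat is down" concern would have to be handled with a cached fallback. A minimal sketch (the cache location, single-file layout, and error handling are all illustrative assumptions, not an existing geoadmin component):

```python
import json
import os
import tempfile
import urllib.request

# Hypothetical cache location; a real deployment would use a proper store.
CACHE_FILE = os.path.join(tempfile.gettempdir(), "geocat_infobox_cache.json")

def fetch_record(url, timeout=5):
    """Try the live CSW service; fall back to the last cached copy if it
    is unreachable. Returns None if there is no live response and no cache."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8")
        # Naive single-entry cache, good enough for a sketch.
        with open(CACHE_FILE, "w") as f:
            json.dump({url: body}, f)
        return body
    except (OSError, ValueError):
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE) as f:
                return json.load(f).get(url)
        return None
```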
If we still decide to go with it, I see 2 acceptable solutions:
- GeoCat service + front end templating
- Direct connection to GeoCat DB + templating backend
It would be great to have a common DB for GeoCat and MapGeoAdmin.
- CSW uptime is an issue, I agree; we don't control the service, it is a SaaS contract with c2c
- the issue is that geocat2bod is not done every week by IGKS (I agree, we are trying to fix a process issue with a technological "workaround")
Clear process issue here as it shouldn't be automated because QS needs to be assured.
When IGKS does the geocat2bod, is everything on prod updated?
With the next deploy - yes
Not to forget: we also have the meta info from geocat in the GetCapabilities of our WMS and WMTS. So at least we need to cache this information somewhere for performance reasons.
But I could well imagine an auto-import every x hours into bod prod...
?? I think it is not an option to auto-import and publish data content to map.geo.admin.ch from geocat without QS.
QS should be in geocat. That's why I said we have a process issue.
Currently, we have info in geocat and in map.geo that is not the same - that's really wrong on many levels.
We have geocat2bod, which is a human-driven process for QS. It is all in place.
So what exactly is the change that this issue should address?
To introduce a QS process in geocat so that we can omit geocat2bod. Status geocat3: a QS process has still not been introduced. So this ticket is on hold.
"it is all in place" confused me :=). So we wait on geocat QS process....
We continue to wait .... Ping sca once in a while
I waited now for geocat... long
Still propose to update SPHINX and the BOD at least once an hour: changes done by data owners are then published faster.
So on their side, they are OK with it? QS is in place?
@ltclm @loicgasser This would, IMO, simply mean to automate geocat2bod. What do you think the effort for this is?
(btw: it would be great to be able to work on change sets instead of doing a full update on each iteration. Not sure if geocat supports that though)
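Standard CSW 2.0.2 does allow incremental harvesting via a GetRecords request constrained on the Modified date, which is what a change-set-style update would look like. A sketch of building such a request (assuming geocat's CSW advertises CQL_TEXT constraints in its GetCapabilities, which would need to be verified):

```python
from urllib.parse import urlencode

def incremental_getrecords_url(base, since_iso_date):
    """Build a CSW 2.0.2 GetRecords KVP request asking only for records
    modified since a given date, instead of re-harvesting everything.
    Assumes the server supports CQL_TEXT constraints on 'Modified'."""
    params = {
        "service": "CSW",
        "version": "2.0.2",
        "request": "GetRecords",
        "resultType": "results",
        "typeNames": "csw:Record",
        "elementSetName": "full",
        "constraintLanguage": "CQL_TEXT",
        "constraint_language_version": "1.1.0",
        "constraint": "Modified >= '%s'" % since_iso_date,
    }
    return base + "?" + urlencode(params)

url = incremental_getrecords_url(
    "http://www.geocat.ch/geonetwork/srv/eng/csw", "2016-01-01")
print(url)
```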
On their side: no QS is in place ---- but if we offer hourly updates, a faulty entry can be changed fast.
So we need to inform them about it before we do it. And everybody involved has to give the go.
Then we do it.
What exactly do you want to change? The discussion here is a bit confusing [1]
- geocat -> bod_master
- geocat -> bod_master -> dev -> int -> prod (sphinxsearch and api3 services, layerinfo, etc.)
- geocat -> api3 services (without bod)
- geocat -> wms-bgdi translation
The geocat import is not only used for mf-geoadmin; the WMS translation also uses this data. Hourly deploy of the BOD: sorry, but I think nobody wants that. Can we move this to RE4?
[1]
Yes, "Do you want the layer info to be fetched directly from the csw service?" -> but before that we need the QS to be done on the geocat side (up to now done in the BOD)
Still propose to update SPHINX and the BOD at least once an hour: changes done by data owners are then published faster.
+1 for RE4.
If we do it, then correctly. This means that all services that are using this meta information are updated. This includes WMTS/WMS GetCapabilities. It could also mean that services using this data are set up differently, or that we extract it from code.
We have 2 important goals for RE4:
- one source of truth
- if source of truth changes content, existing 'caches' need to update automatically
That's wild, I know.
This is already implemented / realized at 80%: https://docs.google.com/spreadsheets/d/1SER0o9E-tBO6hhxpN4-PRAkR9TMbl01gowlJsc4k260/edit#gid=0
The services in red are not possible at the moment: we would have to get rid of the po files in chsi and hit the BOD every time a tooltip is clicked, which is not a good idea. We need to cache the DB requests somehow.
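Caching those DB requests could be as simple as a time-limited memoization layer in front of the lookup. A minimal sketch (the decorator and the tooltip-lookup usage are illustrative assumptions, not existing geoadmin code):

```python
import functools
import time

def ttl_cache(seconds=3600):
    """Memoize a function's results for a limited time, e.g. to avoid
    hitting the BOD on every tooltip click (hypothetical usage)."""
    def decorator(fn):
        store = {}  # args -> (timestamp, value)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.time()
            if args in store and now - store[args][0] < seconds:
                return store[args][1]  # still fresh: serve from cache
            value = fn(*args)
            store[args] = (now, value)
            return value
        return wrapper
    return decorator
```

A real deployment would more likely put memcached or varnish in front of the service, but the idea is the same: identical requests within the TTL never reach the database.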
There was already an LBM EPIC exercise and discussion about that https://jira.swisstopo.ch/browse/KOGIS_PRJ-770
??