Talk:DCLite4G
OAI-PMH - Stefan, why don't we want to require GetRecord?
- I just thought ListRecords (including temporal query parameters) would be enough for harvesting. But I don't mind. -- Stefan 09:30, 15 March 2007 (CET)
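For illustration, selective harvesting with ListRecords alone might look something like the sketch below (a minimal Python example; the endpoint URL is a placeholder, while verb, metadataPrefix, from and until are the standard OAI-PMH request parameters):

 import urllib.parse
 import urllib.request
 # Minimal sketch: harvest everything changed in a date window using
 # ListRecords plus the standard OAI-PMH from/until parameters.
 # The base URL below is a placeholder, not a real endpoint.
 BASE = "http://example.org/oai"
 params = {
     "verb": "ListRecords",
     "metadataPrefix": "oai_dc",
     "from": "2007-03-01",
     "until": "2007-03-15",
 }
 with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as response:
     # Raw XML; a real harvester would parse it and follow resumptionToken for paging.
     print(response.read()[:500])

GetRecord would only be needed to fetch a single record by identifier outside such a windowed harvest.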
Interfaces
My current thinking, after talking to pramsey about Simple Catalog Interfaces, is to separate out interfaces for different classes of data search / retrieval tasks. According to this narrative, data, services (including presentation) and "relationships" are three different classes of thing which CSW/ebRIM is trying to treat all at once, and that is why it is proving so slow and overcomplex.
Service metadata
Many people are more interested in repositories of information about web services. They want to do realtime "find-bind" access.
Package metadata
Right now this is the domain of things like shapefiles, GML files, etc. In principle it could be any resource you visit where the resource doesn't change between visits. This also applies to local filesystems and data management.
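As a rough illustration only (plain Dublin Core / DC Terms elements, not an agreed DCLite4G element set), a package-level record for such a static resource might look like:

 <record xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:dct="http://purl.org/dc/terms/">
   <dc:identifier>http://example.org/data/roads.zip</dc:identifier>
   <dc:title>Road centrelines (zipped shapefile)</dc:title>
   <dc:format>application/zip</dc:format>
   <dct:modified>2007-03-01</dct:modified>
   <!-- bounding box given as a simple string purely for illustration -->
   <dct:spatial>7.0 46.0 9.0 48.0</dct:spatial>
 </record>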
Publishing metadata
Other people are more interested in the publishing side: real data syndication between repositories, the kind of thing that OAI-ORE will one day support. In the meantime we have to make something out of what we have got. Jeroen writes:
For the harvesting of the catalog itself, GeoNetwork has a custom interface/process. This process will go out to some XML service (provided by another GeoNetwork node at this stage) and will request a very brief result set from the other catalog that contains file identifier (UUID), time stamp and catalog ID. By comparing those with its internally cached records it will decide to request new or updated records, and it will remove those that are no longer found.
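A rough sketch of that comparison step (the function and variable names are invented for illustration, not GeoNetwork's actual interface): given the brief remote listing of UUID/timestamp pairs and the locally cached ones, decide what to fetch, refresh and delete.

 # Sketch of the harvest planning step: 'remote' and 'local' each map a record
 # UUID to its last-change timestamp (remote = brief listing from the other
 # node, local = internally cached records). Names are illustrative only.
 def plan_harvest(remote, local):
     new = [uuid for uuid in remote if uuid not in local]
     updated = [uuid for uuid in remote if uuid in local and remote[uuid] > local[uuid]]
     removed = [uuid for uuid in local if uuid not in remote]
     return new, updated, removed
 # The harvester would then request full records for 'new' and 'updated'
 # and drop the 'removed' ones from its own catalog.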