Talk:AJAX WebMapping Vector Rendering Design

The object of the project would seem to be to provide accurate and accessible data in a format and interface best suited to TC's objectives, whatever those might be at the time. In practice that means the framework must be flexible enough to present potentially disparate, incomplete, and otherwise varying sets of data in a way that satisfies every reasonable use, within the constraints of the technology and platform available at the time of development (and the reasonably foreseeable future...).

With regard to tile maps: high-resolution satellite photos may be the most timely and accurate representation of topographic features, but unless your monitor occupies the better part of the state of Kentucky, no one will be pulling up any significant part of one at maximum zoom any time soon. In this case the question of how to handle excess data doesn't really arise, since reducing high-resolution analog imagery to a low-resolution (technically digital, but treated as analog) representation discards the excess for us automatically.
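
A rough illustration of why raster tiles get this for free: in a typical Web Mercator tile pyramid (an assumption on my part, not something specified above), the ground distance covered by one screen pixel halves with each zoom level, so anything smaller than a pixel never survives the resampling. The constants and function below are illustrative only.

<source lang="typescript">
// Sketch: ground resolution (metres per screen pixel) in a Web Mercator
// tile pyramid; the tiling scheme and constants are assumptions, not part
// of the discussion above.
const EARTH_CIRCUMFERENCE_M = 40075016.686; // equatorial circumference
const TILE_SIZE_PX = 256;

function metresPerPixel(zoom: number, latitudeDeg: number = 0): number {
  const latRad = (latitudeDeg * Math.PI) / 180;
  return (EARTH_CIRCUMFERENCE_M * Math.cos(latRad)) / (TILE_SIZE_PX * Math.pow(2, zoom));
}

// Features narrower than one pixel at the current zoom are discarded
// "automatically" by the resampling step, which is the point made above.
console.log(metresPerPixel(0));  // ~156543 m per pixel: whole regions collapse into single pixels
console.log(metresPerPixel(18)); // ~0.6 m per pixel: street-level detail resolves
</source>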

Similarly, polygonal data should be rationally smoothed to suit the physical resolution at the time of rendering. Points removed (or rather simply not rendered) when geometry is streamlined at low levels of magnification/zoom should be reincorporated into their original layers in real time as magnification increases. A simple algorithm that measures the degree of flatness/curvature at each point, and hence its significance at each zoom level, could be used both to build these incremental data sets initially and to decide at what level each point should be morphed back into its original position; depending on the purpose for viewing the data, a different algorithm could be used for any aspect of rendering. Regardless, if only two points are required to render a line at a particular resolution, then rendering four or five would be... pointless(?).
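
As a concrete (and entirely hypothetical) sketch of the above: a Douglas-Peucker-style pass can assign each vertex a significance equal to the coarsest tolerance at which it must reappear; at render time the current zoom's pixel size supplies the tolerance, and only vertices at or above it are drawn. The names and types are my own illustration, not an existing OSGeo API.

<source lang="typescript">
// Pre-compute a per-vertex "significance" with a Douglas-Peucker-style pass,
// then at render time keep only the vertices whose significance exceeds the
// tolerance implied by the current zoom. Illustrative sketch only.
interface Pt { x: number; y: number; }

// Perpendicular distance from p to the line through a and b.
function perpDistance(p: Pt, a: Pt, b: Pt): number {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.sqrt(dx * dx + dy * dy);
  if (len === 0) return Math.sqrt((p.x - a.x) ** 2 + (p.y - a.y) ** 2);
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// significance[i] = coarsest tolerance at which vertex i must be drawn.
function computeSignificance(line: Pt[]): number[] {
  const sig = new Array<number>(line.length).fill(0);
  sig[0] = sig[line.length - 1] = Infinity; // endpoints are always drawn

  const recurse = (first: number, last: number, cap: number): void => {
    if (last - first < 2) return;
    let maxDist = -1, index = first + 1;
    for (let i = first + 1; i < last; i++) {
      const d = perpDistance(line[i], line[first], line[last]);
      if (d > maxDist) { maxDist = d; index = i; }
    }
    // Clamp so a vertex never outranks the vertex that introduced its segment.
    sig[index] = Math.min(maxDist, cap);
    recurse(first, index, sig[index]);
    recurse(index, last, sig[index]);
  };

  recurse(0, line.length - 1, Infinity);
  return sig;
}

// One pixel's worth of ground distance is a natural per-zoom tolerance.
function verticesAtZoom(line: Pt[], sig: number[], tolerance: number): Pt[] {
  return line.filter((_, i) => sig[i] >= tolerance);
}
</source>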

Similar algorithms could be applied to data sets that don't lend themselves to analysis by visual constraints but rather by other measures of significance: for example, the population of towns and cities could determine which are shown at each level of magnification, or occurrences of particular diseases could be filtered dynamically by population for the CDC, etc.
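
For instance, a measure like population can be mapped to the minimum zoom level at which a feature appears. The thresholds below are invented purely for illustration, not taken from any real dataset or API.

<source lang="typescript">
// Filter point features by an application-supplied measure of significance
// (here, city population) mapped to a minimum zoom level. Thresholds are
// illustrative assumptions only.
interface PlaceFeature { name: string; population: number; }

function minZoomFor(place: PlaceFeature): number {
  if (place.population > 1000000) return 3;   // major cities appear early
  if (place.population > 100000) return 6;
  if (place.population > 10000) return 9;
  return 12;                                  // small towns only when zoomed well in
}

function placesToRender(places: PlaceFeature[], zoom: number): PlaceFeature[] {
  return places.filter(p => minZoomFor(p) <= zoom);
}
</source>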

Regardless, I think priority should perhaps be placed on enabling a simple plugin infrastructure that can be tweaked and cajoled in any way imaginable with very little effort, so that potential usability is not restricted by any hard-wired preferences about how to interpret or present particular data. That in itself is perhaps the beauty of the potential presented by the dynamic use of true vector data... we're going back to the precision of analog in what we thought was a digital world (reminds me of the vector-graphic Apollo workstations I programmed on at school. And Asteroids.)
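
To make the plugin idea a little more concrete, here is a minimal sketch of the kind of hook I have in mind: the core renderer knows nothing about the data, and each plugin decides how (and whether) to draw a feature at the current view. The interface and class names are my own invention, not an existing API.

<source lang="typescript">
// Minimal plugin registry: the core loops over features and delegates all
// interpretation and styling decisions to whichever plugin claims each one.
interface Feature {
  id: string;
  geometry: unknown;
  properties: Record<string, unknown>;
}

interface RenderContext {
  zoom: number;
  draw(geometry: unknown, style: Record<string, unknown>): void;
}

interface RenderPlugin {
  // Return true if this plugin wants to handle the feature.
  accepts(feature: Feature): boolean;
  // Render (or deliberately skip) the feature for the current view.
  render(feature: Feature, ctx: RenderContext): void;
}

class PluginRegistry {
  private plugins: RenderPlugin[] = [];

  register(plugin: RenderPlugin): void {
    this.plugins.push(plugin);
  }

  renderAll(features: Feature[], ctx: RenderContext): void {
    for (const f of features) {
      const plugin = this.plugins.find(p => p.accepts(f));
      if (plugin) plugin.render(f, ctx);
    }
  }
}
</source>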

Anyway, just a thought...