nroets at gmail.com
Sun Aug 15 20:12:52 BST 2010
AFAIK, robots.txt only applies to recursive downloads. Given that the file
names follow simple patterns and timestamp files exist, it is really
not necessary to run recursive spiders at all. That said, wget can be
told to ignore robots.txt (-e robots=off), and curl never consults it
in the first place.
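To illustrate the non-recursive approach: a sketch of deriving a replicate
file path directly from a sequence number, so each file can be fetched by
name with a plain wget/curl call. The seq_to_path helper name and the
three-digit directory grouping are assumptions based on the "simple
patterns" mentioned above, not an official spec.

```shell
# Hypothetical helper: zero-pad the sequence number to 9 digits and
# split it into AAA/BBB/CCC to form the replicate file path.
seq_to_path() {
  local padded
  padded=$(printf '%09d' "$1")
  echo "${padded:0:3}/${padded:3:3}/${padded:6:3}.osc.gz"
}

seq_to_path 4291730   # prints 004/291/730.osc.gz
```

With the path in hand, a single non-recursive `wget` or `curl` fetch of
that one URL involves no spidering, so robots.txt handling never comes
into play.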
On Sun, Aug 15, 2010 at 6:39 PM, Anthony <osm at inbox.org> wrote:
> I see http://planet.openstreetmap.org/robots.txt now has User-agent: *
> and Disallow: /
> Are we allowed to download the minute-replicate files as they become
> available? If not, what's the point of having them?