[OSM-dev] robots.txt

Nic Roets nroets at gmail.com
Sun Aug 15 20:12:52 BST 2010


Hello Anthony,

AFAIK, robots.txt only applies to recursive crawlers. Given that the
file names follow a simple, predictable pattern and state/timestamp
files record which file is current, there is no need to spider the
site at all; you can fetch each file directly. That said, wget can be
told to ignore robots.txt (-e robots=off), and curl never consults it
in the first place, since it is not a recursive downloader.
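
For the minute-replicate directory, that means something like the
following minimal Python sketch. The base URL and the split of the
nine-digit sequence number into an AAA/BBB/CCC.osc.gz path are my
assumptions about how the replication directory is laid out, so check
them against the server before relying on this:

    # Sketch: fetch the latest minute diff directly, no crawling needed.
    # BASE and the AAA/BBB/CCC.osc.gz layout are assumed, not documented here.
    import re
    import urllib.request

    BASE = "http://planet.openstreetmap.org/minute-replicate"  # assumed

    def latest_sequence():
        """Read the current sequence number from the state file."""
        with urllib.request.urlopen(BASE + "/state.txt") as resp:
            state = resp.read().decode("utf-8")
        return int(re.search(r"sequenceNumber=(\d+)", state).group(1))

    def diff_url(seq):
        """Map a sequence number to its AAA/BBB/CCC.osc.gz path."""
        s = "%09d" % seq
        return "%s/%s/%s/%s.osc.gz" % (BASE, s[0:3], s[3:6], s[6:9])

    seq = latest_sequence()
    urllib.request.urlretrieve(diff_url(seq), "%d.osc.gz" % seq)
    print("downloaded diff", seq)

Each fetch here is a single HTTP GET of a known URL, so no crawler is
involved and the robots.txt exclusion never comes into play.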

Regards,
Nic

On Sun, Aug 15, 2010 at 6:39 PM, Anthony <osm at inbox.org> wrote:
> I see http://planet.openstreetmap.org/robots.txt now has User-agent: *
> and Disallow: /
>
> Are we allowed to download the minute-replicate files as they become
> available?  If not, what's the point of having them?
>
>
