Of course, we're not talking about that intensive of a requirement. It's not as if they are Netflix, having to deliver 30% of all North American Internet traffic during peak evening hours, or Facebook, taking in 500 TB of data a day and managing a 100 PB database. We're just talking about text files here.

There are plenty of technical ways to deliver content globally while minimizing bandwidth and balancing load, such as CDNs or local mirrors, and ways to handle spikes in traffic, such as on-demand computing. These are used all the time in the real world. My guess is that an overbearing demand for security around this unclassified, public info has led them to avoid all of these approaches.

So yes, people shouldn't be dicks when doing their queries, and the API should have features to limit that kind of usage, but their servers should be more robust as well.

-------
Brian Weeden
Secure World Foundation
+1 202 683-8534

On Sep 1, 2012, at 3:28, Jonathan McDowell <jcm@head.cfa.harvard.edu> wrote:

>
>> The reason they are going this route
>
> Let me just be clear that the actual phrasing was more polite - but
> that was the sense I inferred.
>
> And it seems reasonable - if everyone downloads the whole catalog every
> few hours when only a few objects have changed, that's very wasteful of
> bandwidth. It's pretty amazing that this resource is available to the
> world's public... we should be prepared to do the work of keeping track
> of the last data we downloaded and just updating with the new stuff.
>
> jonathan
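For anyone who wants to do the bookkeeping Jonathan describes, a minimal sketch in Python might look like the following. The endpoint URL and the "updated_since" parameter are placeholders for illustration, not the real Space-Track interface; the point is just to persist the time of your last successful pull and ask only for what changed since then.

# A minimal sketch of the "track what you last downloaded" approach.
# The endpoint and the "updated_since" filter are hypothetical; the real
# catalog API's query syntax differs. Only the bookkeeping is the point.
import os
from datetime import datetime, timezone

import requests

STATE_FILE = "last_fetch.txt"             # where we remember our last pull
BASE_URL = "https://example.org/catalog"  # placeholder, not the real API


def read_last_fetch():
    """Return the timestamp of our previous download, or None on first run."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return f.read().strip()
    return None


def fetch_elements():
    """Fetch only the element sets changed since our last pull."""
    since = read_last_fetch()

    # Note the time *before* the request, so anything updated while the
    # request is in flight gets picked up on the next run.
    now = datetime.now(timezone.utc).isoformat()

    params = {"updated_since": since} if since else {}  # full pull on first run
    resp = requests.get(BASE_URL, params=params, timeout=30)
    resp.raise_for_status()

    # Record this successful fetch for next time.
    with open(STATE_FILE, "w") as f:
        f.write(now)

    return resp.text


if __name__ == "__main__":
    data = fetch_elements()
    print(f"Downloaded {len(data.splitlines())} lines of element data")

On the first run there is no saved state, so it falls back to a full download; every run after that pulls only the delta, which is exactly the bandwidth saving being asked for.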