IMHO, data centers kind of need to be somewhat close to important population areas in order to ensure low latency.
You need a spot with attainable land, room to scale, close proximity to users, and decent infrastructure for power / connectivity. You can’t actually plop something out in the middle of BFE.
The number of data centers in Prineville/Hermiston/Umatilla/Boardman, OR begs to differ. Power is cheap thanks to the Bonneville/Columbia River hydro dams, and that trumps latency - those towns are BFE as hell unless you live in Portland.
While latency matters sometimes, there are still a lot of data center services that care a lot less and can be put anywhere.
One of those cities is pretty close to Redmond. The others are 2-3 hours away from a major population center. The San Francisco equivalent would be data centers in Sacramento. Not exactly next door, but close enough that latency isn’t terrible for loading an e-commerce site or something.
I remember reading a story about an email server that couldn’t send email more than about 500 miles. After a lot of digging, it turned out an upgrade had effectively reset the SMTP connection timeout to zero, so the connect call would give up after a few milliseconds - and a few light-milliseconds works out to roughly 500 miles, so anything farther away got rejected for taking too long.
In case anyone wants to read that: https://www.ibiblio.org/harris/500milemail.html
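For anyone curious about the arithmetic behind that story, here’s a rough back-of-the-envelope sketch. The ~3 ms effective timeout is the figure from the linked write-up; treat the numbers as approximations:

```python
# Rough sketch of the 500-mile-email arithmetic (numbers are approximate).
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282  # speed of light in a vacuum

def max_distance_miles(timeout_seconds: float) -> float:
    """Farthest a signal can travel (one way) before the timeout fires."""
    return SPEED_OF_LIGHT_MILES_PER_SEC * timeout_seconds

# An effective connect timeout of ~3 ms corresponds to roughly 550-560 miles.
print(f"{max_distance_miles(0.003):.0f} miles")  # ~559 miles
```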
For the majority of applications you need data centers for, latency just doesn’t matter; bandwidth, storage space, and energy costs, for example, are generally far more important.
“need to be somewhat close to important population areas”

They really don’t. I live in regional Australia - the nearest data center is 1300 miles away. It’s perfectly fine. I work in tech, and we had a small data center (50 servers) in our office with a data-center-grade fibre link - we got rid of it because it was a waste of money. Even the latency difference between 1300 miles and 20 feet wasn’t worth keeping it.
To be clear, the difference between ~0.1ms locally and the latency to a data center 1300 miles away was noticeable for some things. But nothing that really matters, and certainly not AI, where you’re often waiting 5 seconds or even a full minute anyway.
The earth has a circumference of about 25,000 miles, and the speed of light in a fiber cable is roughly 124,000 miles per second, so going the whole way around the earth would take about 0.2 seconds (assuming you could send a signal that far).
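To put the numbers from this thread side by side, here’s a quick sketch of one-way propagation delay in fiber at various distances, taking fiber speed as roughly two-thirds of c (about 124,000 miles per second). This is propagation only - real paths add routing, switching, and queueing on top:

```python
# One-way propagation delay in optical fiber (ignores routers, switches, queueing).
FIBER_MILES_PER_SEC = 124_000  # ~2/3 the speed of light in a vacuum

def one_way_delay_ms(distance_miles: float) -> float:
    return distance_miles / FIBER_MILES_PER_SEC * 1000

for label, miles in [
    ("20 feet (same office)", 20 / 5280),
    ("1,300 miles (regional Australia example)", 1_300),
    ("25,000 miles (around the earth)", 25_000),
]:
    print(f"{label}: {one_way_delay_ms(miles):.4f} ms")
# 20 feet -> ~0.00003 ms, 1,300 miles -> ~10.5 ms, 25,000 miles -> ~202 ms
```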
Sure, but real infrastructure is not just fiber - there’s a lot of gear (routers, switches, peering points) sitting between your long stretches of fiber, and each hop adds latency.
I’m not a sysops guy, but I can pull from different data centers and see measurable differences.
This is a pretty well-known phenomenon. That’s why cloud providers put data centers close to major metro areas.
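If you want to see those differences yourself, a minimal sketch like the one below just times a TCP handshake to each endpoint. The hostnames are placeholders (substitute endpoints in whichever regions you care about), and the timing includes DNS lookup - it’s cruder than a proper ping or HTTP benchmark, but it makes the nearby-vs-far-away gap obvious:

```python
# Minimal sketch: time a TCP handshake to endpoints in different regions.
# The hostnames below are placeholders - swap in real endpoints you care about.
import socket
import time

ENDPOINTS = [
    ("nearby-region.example.com", 443),
    ("far-away-region.example.com", 443),
]

for host, port in ENDPOINTS:
    start = time.perf_counter()
    try:
        # create_connection resolves DNS and completes the TCP handshake.
        with socket.create_connection((host, port), timeout=5):
            elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{host}: {elapsed_ms:.1f} ms")
    except OSError as exc:
        print(f"{host}: failed ({exc})")
```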