

Introibo Ad Altar Dei · 2,066 Posts · Discussion Starter · #1
Let's say you have a server farm and you could host all the data that, for instance, a bunch of newspapers wanted to upload to your farm. How much upload bandwidth would be needed for the public to access a site similar to, say, nytimes.com or archive.org? What kind of upload speed would you need to support a huge simultaneous user base like that? I would think you'd need more than a U-verse line; wouldn't you need some serious bandwidth, like a custom deal with Cox or AT&T, to support a website that large?

How do the big sites like Facebook, Twitter, nytimes.com, and archive.org get all that information out, where is that upload bandwidth coming from, and how does one get into a project like that: a server farm with tons of info made public for people to access?

+Rep for help, thoughts, ideas.
 

Registered · 1,787 Posts
Most huge web systems are edge-served via companies like Akamai (in fact, if you open TCPView or a similar program, you'll find you're making a lot of connections to Akamai). It basically means the content isn't all hosted in one place, kinda like load balancing, but it also creates redundancy.
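
You can get a feel for this without TCPView, too: resolve a big site's hostname and the answers are often CDN edge addresses rather than the publisher's own datacenter. A rough Python sketch (the hostnames are just examples):

```python
# Resolve a couple of big-site hostnames and print the addresses they map to;
# large publishers often resolve to CDN edge nodes (Akamai etc.) rather than
# to their own datacenter. Hostnames are examples only.
import socket

for host in ("www.nytimes.com", "www.facebook.com"):
    try:
        # getaddrinfo returns one tuple per resolved address; sockaddr[0] is the IP
        addrs = {info[4][0] for info in socket.getaddrinfo(host, 80)}
        print(host, "->", ", ".join(sorted(addrs)))
    except socket.gaierror as exc:
        print(host, "-> lookup failed:", exc)
```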

A lot of media content (Flash etc.) is also typically hosted on separate servers, which improves load times and decreases per-server load.

I don't know what companies are using in the US, but here in the UK Ethernet products are still mainstream for most companies (Ethernet to the exchange), while large hosting companies have contracts to feed off fiber backbones that can provide earth-shattering bandwidth.
 
Rep+ · Reactions: Atomagenesis

Registered · 566 Posts
Some companies do colocation, where they house their servers and other equipment in a datacenter owned by another company. Several colocation services offer "unmetered" connections, which isn't the same as "unlimited"; it simply means they don't track how much data you move over the connection. You can get unmetered gig-E colocation, though it's far more expensive, but several gig-E-connected servers in several different datacenters would satisfy even the most bandwidth-hungry sites.

Some choose to build their own datacenter and contract their own connectivity providers, but I would say that if you're looking at something as big as archive.org or nytimes.com, you'd probably need multiple multihomed OC12s at a bare minimum.
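
Back-of-envelope on that: an OC12 runs at roughly 622 Mbit/s, so you can sanity-check how many links you'd need against an assumed average draw per visitor. A rough Python sketch (the per-user rate is an assumption, not a measurement):

```python
# Sanity check on "multiple OC12s": a SONET OC-12 carries about 622 Mbit/s.
# PER_USER_MBPS is an assumed average draw per active visitor, not a measurement.
OC12_MBPS = 622.08
PER_USER_MBPS = 0.5

for links in (1, 2, 4):
    capacity_mbps = links * OC12_MBPS
    users = capacity_mbps / PER_USER_MBPS
    print(f"{links} x OC12 = {capacity_mbps:,.0f} Mbit/s -> ~{users:,.0f} concurrent users")
```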
 

Introibo Ad Altar Dei · 2,066 Posts · Discussion Starter · #4
OK, that makes sense. I suppose then that a site hosting anywhere from 250 to 10,000 users online simultaneously, pulling down data, loading articles, streaming video, what have you, definitely could not be run out of a residential area and would need serious infrastructure contracts to obtain bandwidth that could support something like that. I am working with someone, and we may have the possibility to do something like this for a very powerful client. I am being consulted on the infrastructure end of making this happen, so my curiosity lies more at the bandwidth level: server- and switch-wise, that's not the issue; the only thing I'm worried about is bandwidth.
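
To put rough numbers on that 250-10,000 range (the per-user rates here are just assumptions for light article reading versus mixed use versus sustained video, not measurements):

```python
# Rough sizing for 250-10,000 concurrent users. Per-user rates (Mbit/s) are
# assumptions: light article reading vs. mixed use vs. sustained video.
SCENARIOS = {"articles": 0.1, "mixed": 0.5, "video": 2.0}

for users in (250, 10_000):
    for name, rate in SCENARIOS.items():
        total_mbps = users * rate
        print(f"{users:>6} users, {name:<8}: {total_mbps:>8,.1f} Mbit/s "
              f"({total_mbps / 1000:.2f} Gbit/s)")
```

Even the low end of that range goes well beyond any residential line once video is involved, which is why the answers above point at colocation or CDN contracts.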
 