Just a heads up for Kim and the others at MA. The post by Kitch is typical of people who know what's going on. Remember that before you told us the servers went BOOM, everybody was bitching and calling you names because stuff was slow and laggy. Now that we know, we are much more forgiving, because we know what's going on and that it was most likely something beyond your control. I would suggest releasing that kind of info next time too, preferably a lot sooner and maybe even in more detail (what happened to the old servers). Same goes for when VUs get postponed or take a lot longer to implement than you told us up front. The community will be much less hard on you if you are honest with us.
Just caught that from its source, Recovery Expectation. It's sad that a player has to speculate about the reason (Neil @ 15:26 Today) in this thread before someone from MA (Kim son of Jan @ 19:14 Today) later confirmed that the speculation was correct. Now what about those who don't go looking for a needle in a haystack and don't care to read the forums? I wonder if they will post it in the client loader as an update to the userbase universe-wide. Better late than never.

What MMOG Teaches the World About Critical Infrastructure
http://insights.wired.com/profiles/...-gaming-teaches-the-world-about#ixzz395PvtTjE

Putting Users First

In the consumer-first landscape of online gaming, the user experience is a key competitive advantage in the crowded market of global companies offering sophisticated games. With recent developments expanding broadband and mobile access, consumers expect premium content across an ever-increasing number of devices, as and when they want it. The growth in real-time media services will undoubtedly intensify demands: in 2013, real-time entertainment constituted the majority (61 percent) of peak-period internet traffic, up from 58.6 percent in 2012.

That is why many gaming providers are turning to carrier-neutral colocation facilities, which allow them to interconnect directly to all of the necessary network service providers to ensure optimal Quality of Service (QoS). Some carrier-neutral colocation providers even feature densely connected Content Hubs, comprised of leading ISPs, CDNs and Internet exchanges. With such a broad choice of connectivity options, gaming providers can optimize their content distribution performance to various devices, while building in diversity, resilience and redundancy. For instance, the leading gaming provider MindArk hosts its servers with Interxion in order to guarantee its users high-speed PvP interaction for the ultimate MMOG experience.

Like REALLY! "Putting Users First"? Entropia has been down for 5+ hours on a Saturday. Nevertheless, a response was discovered from MA. In all fairness, connectivity and hardware (re: server) resiliency and redundancy are two separate points. It looks like MA were not interested in the hardware resiliency & redundancy part. /facepalm

MA charges its clients exorbitant costs to play and can't even get this part right. Time for a reassessment, maybe, of where cost cutting and (re)investments need to be made. Funny how the intended goal mentioned in the blog doesn't turn out when these other aspects (server hardware) aren't also covered (foresight), judging by the outcomes produced since the servers went back online, which will continue for another week (click for an example).
Nice of them to come out and say so. It's pretty obvious, now that they're using it, that the backup works OK but doesn't work "well". I get very confused as to why they didn't just come out and say: "Hey guys, major hardware failure of unprecedented scale here. Bear with us while we get the backup running; alas, these things happen at the worst times, so we may be slower than usual due to a skeleton crew. We don't know how well the backup will cope while we wait for replacement hardware to original/new spec, but bear with us if it isn't cutting the mustard as well as the normal one. We will of course be reviewing performance following this outage and, if necessary, looking to improve the backup solution for next time. We'll let you know when we have the new hardware in place; there will of course be a scheduled outage for this changeover. In the meantime, thanks for your patience and consideration while we get through this."
Agree, nice and to the point. In terms of resiliency and redundancy, separate backup hardware shouldn't be required at all, because inline failover hardware would be integrated into the overall solution. Nothing about the area where the servers are located suggests anything that even the most basic datacentres aren't built to handle in terms of power backup etc. That is to say, there was no level-0 event that would have levelled the structure housing the servers or interfered with the operation of the power generators etc. The HP servers were simply not selected/configured with proper redundancy and resiliency in mind for their mission-critical operational nature. Anyone in IT who isn't grey-channel (or OEM whitebox) orientated and has half a brain could tell you this. Nevertheless, time for bed; attention is lacking and I've already made a mistake that means another task to be undertaken. nn.
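To make the "inline failover" idea above concrete, here is a minimal, purely hypothetical sketch (Python, nothing to do with MA's actual setup) of the kind of monitor that sits in front of a primary/standby pair: it health-checks the primary and repoints traffic at the standby after a few consecutive failures. The hostnames, port and the repoint-vip.sh script are invented for illustration.

# Hypothetical failover monitor: if the primary stops answering TCP health
# checks, promote the standby. All names below are made up for the example.
import socket
import subprocess
import time

PRIMARY = ("game-primary.example.net", 7777)   # assumed primary game server
STANDBY = ("game-standby.example.net", 7777)   # assumed warm standby
CHECK_INTERVAL = 5                             # seconds between checks
FAILURES_BEFORE_FAILOVER = 3                   # tolerate brief blips

def is_alive(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def promote_standby():
    """Placeholder: repoint the virtual IP / load balancer at the standby."""
    subprocess.run(["./repoint-vip.sh", STANDBY[0]], check=True)

def main():
    failures = 0
    while True:
        if is_alive(*PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                promote_standby()
                break   # a real monitor would keep watching both nodes
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    main()

The point isn't this particular script; it's that failover of this kind is part of the serving solution itself, rather than a separate pile of "backup hardware" someone has to wheel in after an outage.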