For once, the problem is not on Gree's side.
https://twitter.com/hashtag/aws?f=tw...rtical=default
You sure?
http://status.aws.amazon.com/
Which of these servers would that be?
"Elevated latencies and Faults", "Increased Error rates" and "Increased API Errors" everywhere but mainly the DynamoDB in North Virginia
It impacts a lot of websites and services right now, not only our beloved game.
And many people are reporting that this status page is not reflecting the actual situation
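If you don't trust the status page, you can probe the region yourself with a cheap read-only call. A minimal sketch, assuming you have boto3 installed and AWS credentials configured (ListTables is about the lowest-cost DynamoDB call there is):

import boto3
import botocore.exceptions

def dynamodb_healthy(region="us-east-1"):
    # Cheap, read-only API call: either it returns quickly, or it
    # surfaces the elevated error rates the status page is hedging about.
    client = boto3.client("dynamodb", region_name=region)
    try:
        client.list_tables(Limit=1)
        return True
    except botocore.exceptions.ClientError as err:
        print("DynamoDB error:", err.response["Error"]["Code"])
        return False

print("healthy" if dynamodb_healthy() else "unhealthy")

Run it from a few regions and you get a more honest picture than the green checkmarks.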
Well, I'm in the UK and locked out completely.
I'm in France and can't log in to MW either.
And can't post messages in GroupMe.
WDT (VFF) 554085004
Yes, in this case it's an AWS failure causing the problem.
Let's blame Gree anyway.
Have fun, it's only a game
Seems like AWS only became aware of the situation 10 minutes ago...
Looked at their page; lots of errors in N. Virginia.
485046391 & 565411567
Ju100, any way to find out if they're taking care of it, and whether they have an expected time to fix?
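For what it's worth, the Service Health Dashboard publishes RSS feeds per service, so you can poll for updates instead of refreshing the page. A rough sketch; the feed URL pattern here is my assumption from the dashboard's links, so double-check it against the RSS icon next to the service:

import urllib.request
import xml.etree.ElementTree as ET

# Assumed per-service feed URL pattern; verify on status.aws.amazon.com.
FEED = "http://status.aws.amazon.com/rss/dynamodb-us-east-1.rss"

with urllib.request.urlopen(FEED) as resp:
    root = ET.parse(resp).getroot()

# Print each posted status update with its timestamp.
for item in root.iter("item"):
    print(item.findtext("pubDate"), "-", item.findtext("title"))

They don't usually give an ETA, but at least you'll see new updates the moment they post them.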
Yep, let's hope it's nothing massive.
On another note: it's funny to see how fast Gree is reacting.
It's really nice of them to have someone on call during weekends, especially since that's when we get massive faction events!
Shhh, don't tell Gree, they might fix something they wouldn't otherwise have.
I don't see any official statement from Amazon. Either that's good, because they're spending their time fixing it quickly instead of posting on Twitter, or it's bad, because they have a tortoise's reaction time on this Sunday morning...
31 affected services: it's bad. I'm not even sure it was that bad when it happened some years ago...
Netflix is still running OK in Europe; some people are complaining in the US.
The DynamoDB service is down, and many AWS services use it internally... that's great...
3:00 AM PDT We are investigating increased error rates for API requests in the US-EAST-1 Region.
3:26 AM PDT We are continuing to see increased error rates for all API calls in DynamoDB in US-East-1. We are actively working on resolving the issue.
4:05 AM PDT We have identified the source of the issue. We are working on the recovery.
4:41 AM PDT We continue to work towards recovery of the issue causing increased error rates for the DynamoDB APIs in the US-EAST-1 Region.
4:52 AM PDT We want to give you more information about what is happening. The root cause began with a portion of our metadata service within DynamoDB. This is an internal sub-service which manages table and partition information. Our recovery efforts are now focused on restoring metadata operations. We will be throttling APIs as we work on recovery.
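Side note for anyone running their own stuff on DynamoDB: since they're throttling APIs during recovery, retries with exponential backoff are a better bet than hammering the endpoint. A sketch, assuming boto3; the retryable error codes are my assumption of the usual throttling codes, so adjust for what you actually observe:

import random
import time
import boto3
import botocore.exceptions

# Error codes worth retrying while AWS throttles during recovery
# (assumed typical set; tune to what shows up in your logs).
RETRYABLE = {"ThrottlingException",
             "ProvisionedThroughputExceededException",
             "InternalServerError"}

def with_backoff(call, max_attempts=6):
    for attempt in range(max_attempts):
        try:
            return call()
        except botocore.exceptions.ClientError as err:
            code = err.response["Error"]["Code"]
            if code not in RETRYABLE or attempt == max_attempts - 1:
                raise
            # Sleep 0.1s, 0.2s, 0.4s, ... plus jitter before retrying.
            time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))

client = boto3.client("dynamodb", region_name="us-east-1")
tables = with_backoff(lambda: client.list_tables())
print(tables["TableNames"])

The jitter matters: if every client retries on the same schedule, the retries themselves become a thundering herd against a service that's already struggling.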