Super fund boss and Google Cloud global CEO issue joint statement apologising for ‘extremely frustrating and disappointing’ outage
A week of downtime, and all the servers were recovered only because the customer had a proper disaster recovery protocol and held backups somewhere else; otherwise Google would have deleted the backups too.
The Google Cloud CEO says "it won't happen again". It's insane that "instantly delete everything" is even a possibility.
Yeah there's that, and the fact that you have no control over how much the bill will be each renewal period. Those two things kept me off the cloud for anything important.
That's what I've been trying to explain to my family forever. Their answer always amounts to something like "it would be illegal for them to look at my data!" As if those companies would care.
Hardly. It's several colocated computers/drives designed to survive major events. It's insane to me that sysadmins still think their 7-year-old desktop sitting in a closet offers the same level of protection.
It's not the tools that offer protection. It's the practices and redundancies that matter. How often are you making secondary and tertiary backups? Are those backups stashed in different locations and on different media?
This business owner made the right move by not relying on a single source for backups. Too many people and small businesses don't think like that. They assume one backup is enough.
If it's really important, you should be following the 3-2-1 backup rule, no matter whether you're using Google Cloud or the old Gateway in a closet. Without multiple backups, you're always putting all your eggs in one basket. It doesn't matter how much you trust that basket; it's a dumb idea to rely on only one backup for anything important, even if it's Google Cloud or AWS.
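The 3-2-1 rule (at least 3 copies of the data, on at least 2 different kinds of media, with at least 1 copy offsite/off-provider) can even be sanity-checked mechanically. A minimal sketch; the copy names and media labels below are hypothetical, not from the article:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupCopy:
    name: str      # e.g. "office-nas", "other-provider"
    medium: str    # e.g. "cloud", "hdd", "tape"
    offsite: bool  # stored away from the primary site/provider?

def satisfies_3_2_1(copies):
    """3+ total copies, on 2+ distinct media, with at least 1 offsite."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

copies = [
    BackupCopy("primary-gcp", "cloud", False),
    BackupCopy("office-nas", "hdd", False),
    BackupCopy("other-provider", "cloud", True),  # the kind of copy that saved UniSuper
]
print(satisfies_3_2_1(copies))  # True
```

Note that the third copy living at a *different* provider is what matters here: two copies inside the same Google account would have been deleted together.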
They said the outage was caused by a misconfiguration that resulted in UniSuper’s cloud account being deleted, something that had never happened to Google Cloud before.
Bullshit. I've heard of people having their Google accounts randomly banned or even deleted before. Remember when the Terraria devs cancelled the Stadia port of Terraria because Google randomly banned their account and then took weeks to acknowledge it? The only reason Google responded so quickly to this is that the super fund manages over $100b and could sue the absolute fuck out of Google.
This happened to me years ago. Suddenly got a random community guidelines violation on YouTube for a 3 second VFX shot that was not pornographic or violent and that I owned all the rights to. After that my whole Google account was locked down. I never found out what triggered this response and I could never resolve the issue with them since I only ever got automated responses. Fuck Google.
This sort of story is what made me switch away from Google Fi and ultimately mostly degoogling. Privacy was a big part later on, but initially it was realizing that a YouTube comment or a file in my drive could get my cell service turned off.
“This is an isolated, ‘one-of-a-kind occurrence’ that has never before occurred with any of Google Cloud’s clients globally. This should not have happened.
I don't believe this is really that rare. What I believe is that this was the first time it happened to a company with enough exposure to actually have an impact and reach the media.
Either way, Google's image won't ever recover from this. They just lost what little credibility they had in the cloud space and won't even be considered again by any institution in the financial market (you know, the people with the big money). There's no coming back from this.
It has 100% happened before and just never been admitted to. I have first-hand dealt with the aftermath and heard about it from other smaller companies. I work at a medium-sized MSP and disaster recovery is in my wheelhouse.
For large businesses, you essentially have two ways to spend money:
OPEX: "operational expenditure" - this is money that you spend on an ongoing basis: things like rent, wages, the 3rd-party cleaning company, cloud services, etc. The expectation is that when you use OPEX, the money disappears off the books and you don't get a tangible thing back in return. Most departments will have an OPEX budget to spend for the year.
CAPEX: "capital expenditure" - buying physical stuff: things like buildings, stock, machinery, and servers. When you buy a physical thing, it gets listed as an asset on the company accounts, usually being "worth" whatever you paid for it. The problem is that things tend to lose value over time (with the exception of property), so when you buy a thing the accountants will want to know a depreciation rate - how much value it will lose per year. For computer equipment this is typically ~20%, making it "worthless" after 5 years. Departments typically don't have a big CAPEX budget, and big purchases usually need to be approved by the company board.
This leaves companies in a slightly odd spot where, from an accounting standpoint, it might look better on the books to spend $3 million/year on cloud stuff than $10 million every 5 years on servers.
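The books-level difference above can be sketched with the toy figures from these comments ($10M of servers at ~20%/year straight-line depreciation versus $3M/year of cloud OPEX; all numbers illustrative, not from the article):

```python
def straight_line_depreciation(cost, rate=0.20, years=5):
    """Book value of a capital purchase at the end of each year, floored at zero."""
    return [max(cost - cost * rate * y, 0) for y in range(years + 1)]

# CAPEX: $10M of servers on the books, depreciating to zero over 5 years
book_values = straight_line_depreciation(10_000_000)
print(book_values)
# [10000000.0, 8000000.0, 6000000.0, 4000000.0, 2000000.0, 0.0]

# OPEX: $3M/year of cloud spend over the same 5 years
opex_total = 3_000_000 * 5    # 15,000,000: more cash out the door overall,
capex_total = 10_000_000      # but OPEX never sits on the books as a wasting asset
print(opex_total > capex_total)  # True
```

Which is exactly the "odd spot": the cloud route can cost more in cash terms yet still look preferable on the accounts.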
Excellent explanation, however, technically it does not constitute an "odd spot." Rather, it represents a "100% acceptable and evident position" as it brings benefits to all stakeholders, from accounting to the CEO. Moreover, it is noteworthy that investing in services or leasing arrangements increases expenditure, resulting in reduced tax liabilities due to lower reported profits. Compounding this, the prevailing high turnover rate among CEOs diminishes incentives for making significant long-term investments.
In certain instances, there is also plain corruption. This occurs when a supplier offering services such as computer and server leasing or software, as well as company car rentals, is owned by a friend or family member of a C-level executive.
If you are a small company, then yes. But I would argue that for larger companies this doesn't hold true. If you have 200 employees, you'll need an IT department either way. You need IT expertise either way. So having some people who know how to plan, implement, and maintain physical hardware makes sense too.
There is a breaking point between economies of scale and the added effort of coordinating between your company and the service provider, plus paying that service provider's overhead and profits.
It's absolutely not. If you are at any kind of scale whatsoever, your yearly spend at a cloud provider will be a minimum of 2x what it would cost to build and operate the same system locally, including all the employees, contracts, etc.
G Suite is a legitimate option for small-medium businesses. It's seen as the cheaper, simpler option versus Azure. I usually recommend it for nonprofits as they have a decent free option for 501c3 orgs.
While UniSuper normally has duplication in place in two geographies, to ensure that if one service goes down or is lost then it can be easily restored, because the fund’s cloud subscription was deleted, it caused the deletion across both geographies.
TFW your BCDR gets disastered.
Also, "massive misconfiguration" is the "spontaneous disassembly" of cloud computing. I'm sure it's multiple systems misconfigured and causing chaos, but it sounds hilarious.
Geo redundancy isn't BCDR. They did have a backup in another cloud provider, which is smart for this exact reason. If Google deletes all your shit, and you don't have a backup outside Google, you are well and truly fucked.
Just an FYI in case you don't follow cloud news, but Google has deleted customers' accounts on multiple occasions and has for literal years. This time they just did it to someone large enough to make the news. I work in SRE and no longer recommend GCP to anyone.
More than half a million UniSuper fund members went a week with no access to their superannuation accounts after a “one-of-a-kind” Google Cloud “misconfiguration” led to the financial services provider’s private cloud account being deleted, Google and UniSuper have revealed.
Services began being restored for UniSuper customers on Thursday, more than a week after the system went offline.
Investment account balances would reflect last week’s figures and UniSuper said those would be updated as quickly as possible.
In an extraordinary joint statement from Chun and the global CEO for Google Cloud, Thomas Kurian, the pair apologised to members for the outage, and said it had been “extremely frustrating and disappointing”.
“These backups have minimised data loss, and significantly improved the ability of UniSuper and Google Cloud to complete the restoration,” the pair said.
“Restoring UniSuper’s Private Cloud instance has called for an incredible amount of focus, effort, and partnership between our teams to enable an extensive recovery of all the core systems.