Sorry but I can’t think of another word for it right now. This is mostly just venting but also if anyone has a better way to do it I wouldn’t hate to hear it.
I’m trying to set up a home server for all of our family photos. We’re on our way to de-googling, and part of the impetus for the change is that our Google Drive is almost full. We have a few hundred gigs of photos between us. The problem with trying to download your data from Google is that it will only allow you to do so in a reasonable way through Google Takeout. First you have to order it. Then you have to wait anywhere from a few hours to a day or two for Google to “prepare” the download. Then you have one week before the takeout “expires.” That’s one week to the minute from the time of the initial request.
I don’t have some kind of fancy California internet, I just have normal home internet, and there is just no way to download a 50 gig (or 2 gig) file in one go - there are always interruptions that require restarting the download. But if you try to download the files too many times, Google will give you another error and you have to start over and request a new takeout. Google doesn’t let you download the entire archive either; you have to select each file part individually.
I can’t tell you how many weeks it’s been that I’ve tried to download all of the files before they expire, or google gives me another error.
It’s called: vendor lock-in.
I have fancy California Internet and the downloads are surprisingly slow and kept slowing down and turning off. It was such a pain to get my data out of takeout.
Have you tried mounting the google drive on your computer and copying the files with your file manager?
From a search, it seems photos are no longer accessible via Google Drive and photos downloaded through the API (such as with Rclone) are not in full resolution and have the EXIF data stripped.
Google really fucks over anyone using Google Photos as a backup.
Yeah, with takeout, there are tools that can reconstruct the metadata. I think Google includes some JSONs or something like that. It’s critical to maintain the dates of the photos.
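For what it’s worth, the simplest version of that reconstruction can be sketched in a few lines. This assumes the usual Takeout layout where each photo ships with a companion sidecar like `IMG_1234.jpg.json` carrying a `photoTakenTime` field; the function names and the sidecar path convention here are illustrative, and dedicated tools handle far more cases (EXIF, albums, edited copies):

```python
import json
import os

def taken_timestamp(sidecar_text: str) -> int:
    """Pull the Unix timestamp out of a Takeout JSON sidecar."""
    data = json.loads(sidecar_text)
    return int(data["photoTakenTime"]["timestamp"])

def restore_mtime(photo_path: str) -> None:
    """Set a photo's file modification time from its sidecar,
    assumed (illustratively) to sit next to it as <photo>.json."""
    with open(photo_path + ".json") as f:
        ts = taken_timestamp(f.read())
    os.utime(photo_path, (ts, ts))
```

Note that this only restores the filesystem timestamp, not EXIF dates inside the image itself, which is why the purpose-built merge tools exist.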
Also I think if I did that I would need double the storage, right? To sync the drive and to copy the files?
Yeah, they really want to keep your data.
Well, obviously they don’t want you to!
Honestly I thought you were going to bitch about them separating your metadata from the photos and you then having to remerge them with a special tool to get them to work with any other program.
immich has a great guide to move a takeout from google into immich
Links or it didn’t happen
Thank you! The goal is to set up immich. It’s my first real foray into self hosting, and it seems close enough to feature parity with Google that the family will go for it. I ran a test with my local photos and it works great, so this is the next step.
Lmao I am both amused and horrified that I had somehow never come across this datapoint before
omg they WHAT
I’m not really looking forward to that step either
It sucked when I closed my accounts years ago. I had to do it manually for the most part.
I’m surprised that feature exists, tbh. It worked fine for my 20GB split into 2GB archives, if I remember correctly.
I used it for my music collection not that long ago and had no issues. The family’s photo library is an order of magnitude larger, so it’s putting me up against some of the limitations I didn’t run into before.
Try this, then do them one at a time. You have to start the download in your browser first, but you can click “pause” and leave the browser open as it downloads to your server.
Because Google doesn’t want you to export your photos. They want you to depend on them 100%.
I just have normal home internet and there is just no way to download a 50gig (or 2 gig) file in one go
“Normal” home internet shouldn’t have any problem downloading 50 GB files. I download games larger than this multiple times a week.
Yeah, of course it varies place to place, but I think for the majority of at least somewhat developed countries, and urban areas in less developed countries, 50Mbps is a reasonable figure for “normal home internet” - even at 25Mbps you’re looking at about 4½ hours for 50GB, which is very doable if you leave it going while you’re at work or just in the background over the course of an evening.
Edit: I was curious and looked it up. Global average download is around 50-60Mbps and upload is 10-12Mbps.
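The arithmetic behind that 4½-hour estimate, for anyone who wants to plug in their own numbers (decimal GB and megabits, so it’s a rough lower bound that ignores overhead and slowdowns):

```python
def download_hours(size_gb: float, speed_mbps: float) -> float:
    """Rough transfer time: file size in gigabytes, link speed in megabits/s."""
    bits = size_gb * 8e9                 # GB -> bits
    seconds = bits / (speed_mbps * 1e6)  # bits / (bits per second)
    return seconds / 3600

print(round(download_hours(50, 25), 1))  # 50 GB at 25 Mbps -> about 4.4 hours
```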
they must have dialup or live in the middle of nowhere
That’s fair but also not Google’s fault.
The part that is Google’s fault is that they limit the number of download attempts and the files expire after 1 week. That should be clear from the post.
Well then read it as “shitty rural internet.” Use context clues.
Which context clues should I be using to blame your “shitty rural internet” on Google?
The word you’re looking for is “petty.”
You could try using rclone’s Google Photos backend. It’s a command line tool, sort of like rsync but for cloud storage. https://rclone.org/
Looked promising until
When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on “Google Photos” as a backup of your photos. You will not be able to use rclone to redownload original images. You could use ‘google takeout’ to recover the original photos as a last resort
Oh dang, sorry about that. I’ve used rclone with great results (slurping content out of Dropbox, Google Drive, etc.), but I never actually tried the Google Photos backend.
Use Drive, or if it’s more than 15GB or whatever the free max is these days, pay for storage for one month for a couple of dollars on one of the supported platforms and download from there.
I know it’s not ideal, but if you can afford it, you could rent a VPS from a cloud provider for a week or two, do the download from Google Takeout on that, and then use rsync or similar to copy the files to your own server.
Use this. It’s finicky but works for me. You have to start the download on one device, then pause it, copy the command to your file server, then run it. It’s slow and you can only do one at a time, but it’s enough to leave it idling.
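If you’d rather script the resume than babysit the browser, the trick boils down to an HTTP Range request that picks up where the partial file left off. A minimal sketch in plain Python (the URL is illustrative and has to come from an authenticated browser session, e.g. via “copy as cURL”; Takeout may also require the session cookies, which this sketch doesn’t handle):

```python
import os
import urllib.request

def resume_offset(path: str) -> int:
    """Bytes already on disk, so the server can be asked to continue from there."""
    return os.path.getsize(path) if os.path.exists(path) else 0

def resume_download(url: str, path: str, chunk: int = 1 << 20) -> None:
    """Append to a partial file using an HTTP Range request."""
    start = resume_offset(path)
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-"})
    with urllib.request.urlopen(req) as resp, open(path, "ab") as out:
        while True:
            block = resp.read(chunk)
            if not block:
                break
            out.write(block)
```

Tools like `curl -C -` or `wget -c` do the same thing with less ceremony, if they can be fed the right cookies.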
I was gonna suggest the same.
I don’t know how to do any of that yet, but it’ll help to know anyway. I’ll look into it. Thanks
Instead of having to set up an operating system with a cloud provider, maybe another cloud backup service would work. Something like Backblaze can receive your Google files. Then you can download from Backblaze at your leisure.
https://help.goodsync.com/hc/en-us/articles/115003419711-Backblaze-B2
Or use the filters by date to limit the amount of takeout data that’s created? Then repeat with different filters for the next chunk.
Be completely dumb and install a desktop OS like Ubuntu Desktop. Then remote into it and use the browser just as normal to download the stuff onto it. We’ll help you with moving the data off it to your local machine afterwards. Critically, the machine has to have as much storage as needed to hold your whole download.
There was an option to split the download into archives of customizable size IIRC
Yeah, that introduces an issue of queuing and monitoring dozens of downloads rather than just a few. I had similar results.
As my family is continuing to add photos over the week, I see no way to verify that previously downloaded parts are identical to the same parts in another takeout. If that makes sense.
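If you do want to spot-check whether a part you already have matches the same part in a fresh takeout, comparing checksums is enough. A small sketch (file names are illustrative), with the caveat that Google regenerates archives per request, so the parts may be packed differently even when the underlying photos haven’t changed - a mismatch doesn’t necessarily mean anything was lost:

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so huge archives never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Byte-identical parts hash the same:
# sha256_of("takeout-001.zip") == sha256_of("takeout-001-retry.zip")
```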
You could try a download manager like DownThemAll on Firefox, set a queue with all the links and a depth of 1 download at a time.
DtA was a godsend when I had shitty ADSL. It splits downloads into multiple parts and manages to survive micro-interruptions in the service.