Thank you to Amazon for S3.
Thank you to Dropbox for hosting large design assets and important files.
Thank you to Hoefler & Frere-Jones for their typography, which we're using on this very blog.
Thank you to Typekit for serving up fonts for us and our clients.
…they're venturing into terra incognita — just think of every smartphone that came before the Galaxy S3 — they need help. As one review of the Galaxy Gear put it:
As with industrial design, software engineering isn't among Samsung's strengths, and the results on the Gear are a painful mix of unreliability and inadequacy.
Rails Assets - Sits between Bundler and Bower to make it easy to pull Bower components into projects.
Why You Should Never Use MongoDB - A critical look, grounded in experience from a real project.
Ruby on Rails' Inside - A look at some Rails internals and Rack.
Duplicity + S3: easy, cheap, encrypted, automated full-disk backups for your servers - Definitely worth thinking about.
On the storage side, Amazon has consistently lowered S3 prices over the past few years. The current price for the us-west-2 region is only $0.09 per GB per month.
Bandwidth costs have also dropped tremendously. Many hosting providers these days allow more than 1 TB of traffic per month per server.
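At that rate the monthly bill is easy to estimate. A quick sketch (the 500 GB figure is purely illustrative, not from the post):

```shell
# Back-of-the-envelope check of the quoted rate: $0.09 per GB per month.
storage_gb=500
rate=0.09
monthly_cost=$(awk -v gb="$storage_gb" -v r="$rate" 'BEGIN { printf "%.2f", gb * r }')
echo "Storing ${storage_gb} GB on S3: \$${monthly_cost}/month"
# → Storing 500 GB on S3: $45.00/month
```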
This makes Duplicity and S3 the perfect combination for backing up my servers. Using encryption …
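The post's exact invocation isn't shown in this excerpt; here is a minimal sketch of an encrypted Duplicity-to-S3 run using Duplicity's standard `s3+http://` backend. The bucket name, paths, and GPG key ID are all placeholders:

```shell
# Credentials for the S3 backend (placeholders).
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."

# Encrypted incremental backup of /etc; force a fresh full backup monthly.
duplicity --encrypt-key "DEADBEEF" \
  --full-if-older-than 30D \
  /etc "s3+http://my-backup-bucket/$(hostname)/etc"

# Restoring is the same command with source and destination swapped:
# duplicity "s3+http://my-backup-bucket/$(hostname)/etc" /tmp/etc-restore
```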
…EBS-Optimized IO throughput of your c1.xlarge cluster? How about the size limit of an S3 object in a single PUT? awsnow.info is the answer to all of your AWS-resource metadata questions. Interested in integrating awsnow.info with your application? You're in luck: there's now a REST API as well!
Note: These are default soft limits and will vary by account.
2) Tame your S3 buckets
Delete an entire S3 bucket with a single command:
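The command itself is cut off in this excerpt; the equivalent one-liner with the modern AWS CLI (the bucket name is a placeholder) would be:

```shell
# "rb" removes the bucket; --force first deletes every object in it,
# since a non-empty bucket can't be removed otherwise.
aws s3 rb s3://my-bucket --force
```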
(Cross-posted from the HPCloud Blog. With 75% more typos!)
One of the most basic problems for systems that persist data is making sure you can recover them after a critical error. I've used and written backup systems for more time than I'd like to admit (for example). With the advent of cloud storage systems such as S3, it has become much easier both to move your data offsite and to recover it from your offsite storage.
…took about an hour from start to finish. Compare that to a day or longer of fiddling with image upload and cropping plugins, and the back-end to support them.
Then there's pricing. The starter pack is $5/month for 1.5GB of storage and 5GB of bandwidth. Or you can use your own S3 bucket and get 5,000 image uploads for the same price.
In short, a fantastic image uploading service. Go check it out.
…such open-source project that performs continuous, automatic archiving of WAL files to S3 across our entire fleet of databases. Initially developed by our resident tuple groomer Daniel Farina, it can now be found on GitHub. WAL-E is quickly becoming the default option for those running PostgreSQL, with companies like Instagram using it to perform point-in-time restorations and quickly bootstrap a new read-replica or failover slave…
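WAL-E's documented usage centers on two commands: `wal-push`, invoked from PostgreSQL's `archive_command`, and `backup-push` for base backups. The paths below follow the project's own examples and should be adjusted for your layout:

```shell
# In postgresql.conf, hand each completed WAL segment to WAL-E:
#   wal_level = archive
#   archive_mode = on
#   archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'

# Push a full base backup of the data directory to S3:
envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/9.2/main
```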
On every run, WAD tries to fetch the bundle from a configured S3 bucket. If the bundle hasn't been cached yet, it calls Bundler and creates a tarball of the .bundle directory; after installing the bundle, WAD pushes the tarball to S3. On every subsequent run the tarball is simply downloaded and unpacked.
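That fetch-or-build flow can be sketched in a few lines of shell, with a local directory standing in for the S3 bucket. The function name and cache-key scheme here are illustrative, not WAD's actual implementation:

```shell
# cached KEY BUILD-CMD...: unpack KEY from the cache if present,
# otherwise run BUILD-CMD and push a tarball of .bundle for next time.
CACHE="${CACHE:-/tmp/wad-cache}"   # stand-in for the configured S3 bucket
mkdir -p "$CACHE"

cached() {
  key="$1"; shift
  if [ -f "$CACHE/$key" ]; then
    tar -xzf "$CACHE/$key"            # cache hit: skip Bundler entirely
  else
    "$@"                              # cache miss: install the bundle...
    tar -czf "$CACHE/$key" .bundle    # ...then push it for the next run
  fi
}

# Real usage would key on the lockfile and call Bundler:
#   cached "bundle-$(sha1sum Gemfile.lock | cut -c1-12).tgz" \
#     bundle install --path .bundle
```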
Time saved by using WAD depends on the amount …
…haven't even bothered creating an error page yet. I've only overridden the 403 errors, since 403 is S3's default answer for files that are either missing or not public.
```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyWith>?not-found</ReplaceKeyWith>
      <HttpRedirectCode>302</HttpRedirectCode>
    </Redirect>
…
```