- Makes me wonder if ScreamTracker3 is doable too.
- It's an over-the-top, funny rant in prototypical UK style on the state of outsourced recruiting - not in-house recruiting (which has its own, different issues).
A lot of the behavior mentioned here is awfully familiar in the US.
- The blog article is a bit more descriptive https://aws.amazon.com/blogs/aws/new-aws-waf/
- Same, though just as I write this we see another spike of errors.
- A note: the image-serving frontend and storage backend for the original Twitter photos integration were written largely in OpenResty. I love it.
- The initial descriptions sound more like someone pointed a testing tool at the wrong environment than like a hack.
- Fun. From everything noted about Slack, that's all they've got as their 'core' as well (PHP/MySQL, I think, with maybe Node.js WebSockets).
- SendGrid recently added MFA to their logins. They also recently added 'multiuser' logins (different from subusers), which allow a separate login/password for API, SMTP, and web under a single account. This is a nice addition for rotating credentials without downtime, though as they state in the blog, proper 'API keys' should be coming soon, which will be even better.
That said, is it odd that the blog describes passwords as '(salted and iteratively hashed)'? Does that indicate a homegrown scheme with unknown properties?
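For context on why that phrasing raises an eyebrow: 'salted and iteratively hashed' could describe anything from a well-studied KDF to a hand-rolled loop. A minimal sketch of the two readings, purely speculative (the iteration count and everything else here is made up, not SendGrid's actual scheme):

    import hashlib
    import os

    ITERATIONS = 100_000  # hypothetical work factor

    def homegrown_hash(password: bytes, salt: bytes) -> bytes:
        # A naive 'salted and iteratively hashed' loop: iterating a bare
        # digest like this has no analyzed security properties and lacks
        # the per-iteration HMAC keying that PBKDF2 provides.
        digest = salt + password
        for _ in range(ITERATIONS):
            digest = hashlib.sha256(digest).digest()
        return digest

    def standard_kdf(password: bytes, salt: bytes) -> bytes:
        # The well-studied reading: PBKDF2-HMAC-SHA256 from the stdlib.
        return hashlib.pbkdf2_hmac('sha256', password, salt, ITERATIONS)

    salt = os.urandom(16)
    print(homegrown_hash(b'hunter2', salt).hex())
    print(standard_kdf(b'hunter2', salt).hex())

If the blog had simply said 'bcrypt' or 'PBKDF2', there'd be no question; the vague wording is what leaves room for the first interpretation.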
- It seems like a logical extension/simplification of Redis for a common use case, as he said.
I'll be interested in seeing comparisons vs. other MQ systems.
- Amazon has already updated their ELB security policies to disable RC4.
- It'd be awesome to get 8.x support for working with quick Redshift queries. As it stands, there isn't a graphical interface to Redshift, which is a bummer.
- And Chet Faker: http://www.amazon.com/Thinking-textures-Chet-Faker/dp/B009WJ...
I think the trick is: search as normal for music, then click the 'Prime Eligible' checkbox. It didn't show up otherwise.
- My first deploy as a developer at a once-top-10 photo hosting site was a change to how the DNS silo resolution worked.
Users were mapped into specific silos to separate out each level of the stack, from CDN to storage to DB. A bit of code executed at the beginning of each request figured out whether the request was on the proper subdomain for the resource being requested.
This feature was always tricky to test, and when I joined, the codebase didn't have any real automated tests at all. We were on a deploy schedule of every morning, first thing (or earlier, sometimes as early as 4am local time).
By the time the code made it out to all the servers, the ops team was calling frantically, saying the power load on the strips and at the distribution point was near critical.
What happened: the code caused every user (millions of them daily) to enter an infinite redirect, very quickly DoSing our servers. It took a second to realize where the problem was, but I quickly committed the fix and the issue was resolved.
Why it happened: a pretty simple string comparison was being done improperly; the fix was at most one line (I can't remember the exact fix). There was no automation, and testing it was difficult enough that we just didn't test it.
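Reconstructed from memory, the shape of the bug was something like this (a minimal sketch; the silo mapping, names, and the exact mismatch are all illustrative, not the actual code):

    def silo_host(user_id: int) -> str:
        # Hypothetical mapping of a user onto a silo subdomain.
        return 'silo%d.example.com' % (user_id % 16)

    def resolve(host: str, user_id: int) -> str:
        expected = silo_host(user_id)
        # The broken version compared the raw Host header (which can
        # carry a port, e.g. 'silo3.example.com:80') against the bare
        # hostname, so the check never matched and every request got a
        # 302 -- even ones already on the correct silo, hence the
        # infinite redirect loop. The one-line fix is to normalize
        # before comparing:
        if host.split(':')[0].lower() != expected:
            return 'redirect to http://%s/' % expected
        return 'serve'

    # A request already on the right silo must not redirect again:
    assert resolve('silo3.example.com:80', 3) == 'serve'

With the un-normalized comparison, that assert fails and the redirect branch fires on every single request.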
What I learned: if it's complicated enough that you don't want to test it in a browser, at least build automation to test your assumptions. Or have some damn tests, period. We built a procedure for testing those silos with a real browser as well.
I got a good bit of teasing for nearly burning down the datacenter on my very first code deploy, but ever since, it's been assumed that if it's your first deploy, you're going to break something. It's a rite of passage.
- Anyone else having trouble connecting to GitHub via SSH from AWS?