- >inb4 it’s actually an intern maintaining a 3000+ line markdown file
- Amazon and BlackBerry both tried the whole “you can upload your same APK to our app store” approach.
And well, when’s the last time you used the Amazon Appstore for Android?
- BlackBerry has been out of the phone business for years now.
They basically sold the brand to TCL, iirc
- The Linux kernel cannot be relicensed. Linus does not hold copyright to most code.
- You can do that with AWS if you really want to.
It will cost you a ton.
- The /usr/bin vs /bin distinction is no longer relevant: all major distros went usrmerge years ago, so /bin == /usr/bin (usually /bin is a symlink to usr/bin)
- For what it’s worth, the recommended way of getting credentials for AWS would be either:
1. Piggyback off your existing auth infra (eg: Active Directory or whatever you already have going on for user auth)
2. Failing that, use IAM Identity Center to create user auth in AWS itself
Either way means that your machine only ever gets temporary credentials
Alternatively, we could write an AWS CLI helper to store the credentials in the keychain (maybe someone already has)
Not to take away from your more general point
We need flatpak for CLI tools
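For the Identity Center route, the setup is just a couple of sections in ~/.aws/config; a sketch (the profile name, start URL, account id, and the keychain helper binary are all hypothetical):

```ini
# ~/.aws/config
[profile dev]
sso_session = my-sso
sso_account_id = 123456789012
sso_role_name = DeveloperAccess
region = us-east-1

[sso-session my-sso]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1

# Or, for the keychain idea: credential_process hands the CLI off to any
# external program that prints credentials, e.g. a hypothetical helper
# that reads them from the OS keychain.
[profile keychain]
credential_process = /usr/local/bin/aws-keychain-helper get dev
```

With the SSO profile, `aws sso login --profile dev` fetches short-lived credentials, so nothing long-lived ever sits on disk.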
- > It shouldn't take a custom script cleaning up after every special snowflake that decided to use some arbitrarily-named directory in $HOME.
Not to take away from your point, but let me introduce you to systemd-tmpfiles:
no scripts needed; it can do the cleanup for you if you keep a list of directories/files to clean up
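A sketch of what that list looks like (the path and age are made up; the `e` type means “clean up existing contents by age, don’t create anything”):

```ini
# /etc/tmpfiles.d/home-cleanup.conf (hypothetical example)
# Type  Path                          Mode  User  Group  Age
# Anything inside this directory untouched for 30 days gets removed
# the next time systemd-tmpfiles --clean runs (there's a daily timer).
e /home/alice/.cache/some-tool        -     -     -      30d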
- You could try the flatpak version of Firefox; if that works for you, then it takes care of that
- Seems that it will be maintained by those who use it (eg: companies and hobbyists alike)
U-Boot will not die from this
- Devices with internal batteries basically have embedded expiration dates.
Standard AAA or AA cells can be rechargeable, so you don’t need to keep buying more. I’d suggest buying like a 100-pack or something; they’re not expensive.
- I literally know an engineer who works on the storage layer for R2, and even he wouldn’t agree that it is better than S3
He wouldn’t disclose any details to me, but from his point of view S3 was best in class
- On the contrary, it seems like title deflation: Amazon principal engineers typically work at a higher level than staff at most other orgs (as I remember it, a Microsoft principal would be roughly an Amazon L5-6)
- L6 is a senior engineer - typically effecting change and setting direction within a development team (or small group of related teams)
L7 is a principal engineer - typically effecting change at org level which will impact many teams
- > you can keep overworking yourself as serf on fiefdoms in which I might own shares, and increase the value of my portfolio for me, so I can draw even more passive income every month
Yeah, I guarantee you that the guy who worked at Amazon as a Principal Engineer for 10 years has a bigger portfolio than you.
levels.fyi says $967k/yr average compensation at that level.
- With this approach it gets really complex to achieve even something like “file processed successfully” on the client side
How will your client know if your backend Lambda crashed or whatever? All it knows is that the upload to S3 succeeded
Basically, you’re turning a synchronous process into an asynchronous one
- > I'm assuming if you're consuming that much YT content you've moved to a premium account or something?
Yes
- Except the iPad battery will last longer and will not kill your phone while you do stuff
>inb4 buy a battery bank
- A lot of people don’t know about compilers, bash scripts and libraries.
- The way to work around this issue is to provide a presigned S3 URL
Have the users upload to S3 directly, and then they can either POST you what they uploaded, or you can find some other means of correlating the input (eg: files in S3 are prefixed with the request id or something)
I agree this is annoying, and maybe I’ve been in the AWS ecosystem for too long.
However, an API that accepts an unbounded amount of data is a good recipe for DoS attacks. I suppose the 100MB limit is outdated as the internet has gotten faster, but eventually we do need some limit
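A minimal sketch of the request-id correlation scheme (the key layout and helper are made up; the server would then generate a presigned PUT URL for this exact key with boto3’s `generate_presigned_url` and hand it to the client):

```python
import uuid

def upload_key_for(request_id: str, filename: str) -> str:
    # Prefixing the object key with the request id lets the backend
    # (e.g. an S3-triggered Lambda) correlate the uploaded file with
    # the original API request, without the client POSTing anything back.
    return f"uploads/{request_id}/{filename}"

request_id = str(uuid.uuid4())
key = upload_key_for(request_id, "input.bin")
print(key)
```

Setting a short `ExpiresIn` on the presigned URL also bounds how long the upload window stays open, which helps with the DoS angle.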