
TrueDuality
2,202 karma

  1. I love the inherent wonder and joy in this post around the original images.
  2. Now THAT is great news
  3. This is a false equivalency I'm surprised no one else has brought up. An archive of a site inherently preserves attribution; the scraping and training do not.
  4. Yeah, you're 100% right that it's optional. It's usually only required to allow company data such as email, Slack, file sharing, etc. on your personal device. If you're on-call, it is VERY rare for an employee to win the fight to make the company provide a dedicated device for that purpose (being on-call can effectively make enrollment a condition of your job, but that's an exception).

    Most employees tend not to care about the why and are happy to just do it, making "you" (the one bucking the trend) the oddball, the one not being a team player. It's not legally required, and you won't be fired for refusing, but it's strongly socially encouraged, and that makes it effectively mandatory for anyone not willing to put up that fight.

  5. Having a device enrolled in an MDM package does not make it a corporate device. Many corporations require personal devices to be managed to support remote wiping. If I install a productivity or developer tool on my personal phone or laptop for personal, non-corporate use, this process would mistake me for a corporate user.

    If you want to collect this information, you should be clear about it, and you should know and understand your edge cases before you start attempting enforcement actions based on it, if that is the intent.

    In general, in my experience, personal tools are a VERY hard market to sell into for corporate environments (I took a peek at what the software on OP's site requires a commercial license to use). I would bet most if not all of what you're catching here is unauthorized installs in corporate environments, and you're more likely to lose interested users than sell more commercial licenses.

  6. I haven't decided my opinion on this specific license, ones like it, or rights around training models on content in general... I think there is a legitimate argument that this could apply to making copies and derivative works of source code and content when it comes to training models. As far as I know, it's still an open legal question whether model weights are a derivative work and whether model output is a distribution of the original content. I'm not a lawyer, but it definitely seems like one of the open gray areas.
  7. Another commenter mentioned that this is needed for consistently ordering events, to which I'd add:

    The consistent ordering of events is important when you're working with more than one system. An unsynchronized clock handles this fine on a single system; it only matters when you're trying to reconcile events with another system.

    This is also a scale problem: when you receive one event per second, a granularity of 1 second may very well be sufficient. If you need to deterministically order 10^9 events per second across systems, you'll want better than nanosecond-level precision if you're relying on timestamps for that ordering. One common mitigation is sketched below.
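
    A minimal sketch of that mitigation, a hybrid logical clock, which pairs the physical timestamp with a logical counter to break ties when the wall clock can't distinguish two events; all names here are illustrative, not taken from any particular system:

    ```python
    import time
    import threading

    class HybridLogicalClock:
        """Pairs wall-clock time with a logical counter so events can be
        totally ordered across systems even when physical clocks collide
        at the available granularity."""

        def __init__(self):
            self._lock = threading.Lock()
            self._last_physical = 0
            self._counter = 0

        def now(self):
            """Timestamp a local event. Tuples compare lexicographically,
            so (physical, counter) pairs sort correctly."""
            with self._lock:
                physical = time.time_ns()
                if physical > self._last_physical:
                    self._last_physical = physical
                    self._counter = 0
                else:
                    # Wall clock hasn't advanced (or went backwards):
                    # break the tie with the logical counter.
                    self._counter += 1
                return (self._last_physical, self._counter)

        def observe(self, remote):
            """Merge a timestamp received from another system so our next
            local timestamp sorts after everything we've seen."""
            remote_physical, remote_counter = remote
            with self._lock:
                physical = time.time_ns()
                if physical > max(self._last_physical, remote_physical):
                    self._last_physical = physical
                    self._counter = 0
                elif remote_physical > self._last_physical:
                    self._last_physical = remote_physical
                    self._counter = remote_counter + 1
                else:
                    self._counter = max(self._counter, remote_counter) + 1
                return (self._last_physical, self._counter)
    ```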

  8. That is also what I came here to find out. Would love to hear from the creators of the project how it compares and contrasts to Talos. We've been running Talos for a few bare-metal and air-gapped cluster deployments with pretty good success but do have some pain-points.
  9. The irony for me is that it's already slow because of the lack of native 64-bit math. I don't care about the memory space available nearly as much.
  10. You don't necessarily need on-package RAM for this. I'm not sure I'd build a project around this, but 16MiB of RAM would hardly be a BOM killer.
  11. I write primarily as a means to collect my thoughts and outcomes around projects. I keep analytics on my site not to optimize for any particular audience, but because it feels validating, like I'm contributing in another form.

    I still see high traffic on a post explaining oddities in some of Route53's unintuitive behaviors and hope I'm making someone's day a little better in giving them a solution.

    That drives me to write more.

  12. LXC far predates Docker, regardless of size or impact. It's not disingenuous if you were literally the foundation Docker was able to package into a shiny, accessible tool.
  13. Remember that under the previous term of the current president, information services were removed from Title II regulation. The FCC under Biden did vote to restore net neutrality last year, but that was challenged in court and never went into effect. It was ultimately overturned in January, and we're left without net neutrality protections.
  14. How many of the makers of these trash SEO sites are going to voluntarily identify their content as AI generated?
  15. The references I'd direct you to are NIST 800-53r5 controls CM-3 (Configuration Change Control) and CM-4 (Impact Analyses), along with their enhancements; they require that configuration changes go through documented approval, security impact analysis, and testing before implementation. A certificate change is unfortunately considered a configuration change to the service.

    Each change needs a documented approval trail. While you can get pre-approval for automated rotations as a class of changes, many auditors interpret the controls conservatively and want to see individual change tickets for each cert rotation, even routine ones.

  16. Speaking as someone who has worked in tightly regulated environments, certificates are kind of a nasty problem, and there are a couple of requirements that conflict with fully automating them:

    - All certificates and authentication material must be rotated at regular intervals (no conflict here, this is the goal)

    - All infrastructure changes need the commands to be executed and the contents of files to be inspected and approved in writing by the change control board before being applied to the environment

    That explicit approval of any change made within the environment goes against automating these in any way, shape, or form. These boards usually meet monthly, or ad-hoc for time-sensitive security updates, and usually have very long lists of changes to review, causing the agenda to constantly overflow into the next meeting.

    You could probably still make it work as a priority standing agenda item, but it's still going to involve manual process and review every month. I wouldn't want to manually rotate and approve certificates every month, and many of these requirements have been signed into law (at least in the US).

    Starting to see another round of modernization initiatives so maybe in the next few years something could be done...

  17. Almost every one of those benefits _doesn't_ require anything else. You need one more API endpoint to exchange refresh tokens for bearer tokens (over a simple static API key), and you get those benefits.
  18. The quick rundown of the refresh token flow I'm referring to (sketched in code after the steps):

    1. Generate your initial refresh token for the user just like you would a random API key. You really don't need to use a JWT, but you could.

    2. The client sends the refresh token to an authentication endpoint. This endpoint validates the token, then expires the refresh token and any prior bearer tokens issued to it. The client gets back a new refresh token and a bearer token with an expiration window (let's call it five minutes).

    3. The client uses the bearer token for all requests to your API until it expires

    4. If the client wants to continue using the API, go back to step 2.
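
    A minimal sketch of those four steps, assuming in-memory stores and made-up names purely for illustration (a real service would persist tokens and hash them at rest):

    ```python
    import secrets
    import time

    BEARER_TTL = 300  # five minutes, matching step 2

    # In-memory stores for illustration only.
    refresh_tokens = {}  # token -> {"user": ..., "bearers": set()}
    bearer_tokens = {}   # token -> {"user": ..., "expires": ...}

    def provision_refresh_token(user):
        """Step 1: mint an initial refresh token, just like a random API key."""
        token = secrets.token_urlsafe(32)
        refresh_tokens[token] = {"user": user, "bearers": set()}
        return token

    def exchange(refresh_token):
        """Step 2: validate, expire the old refresh token and any prior
        bearer tokens issued to it, then hand back a fresh pair."""
        record = refresh_tokens.pop(refresh_token, None)  # one-time use
        if record is None:
            raise PermissionError("unknown or already-used refresh token")
        for old in record["bearers"]:
            bearer_tokens.pop(old, None)
        new_refresh = secrets.token_urlsafe(32)
        new_bearer = secrets.token_urlsafe(32)
        refresh_tokens[new_refresh] = {"user": record["user"],
                                       "bearers": {new_bearer}}
        bearer_tokens[new_bearer] = {"user": record["user"],
                                     "expires": time.time() + BEARER_TTL}
        return new_refresh, new_bearer

    def authenticate(bearer_token):
        """Steps 3-4: accept the bearer token until it expires."""
        record = bearer_tokens.get(bearer_token)
        if record is None or record["expires"] < time.time():
            raise PermissionError("expired or invalid bearer token")
        return record["user"]
    ```

    Note the pop on the refresh token: a second exchange of an already-used token fails loudly, which is exactly what knocks a client offline after a compromise and surfaces that the leak happened.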

    The benefits of that minimal version:

    Client restriction and user behavior steering. With bearer tokens expiring quickly and refresh tokens being one-time use, it is infeasible to share a single credential between multiple clients. With easy provisioning, this will get users to generate one credential per client.

    Breach containment and blast radius reduction. If your bearer tokens leak (logs being a surprisingly common source for these), they automatically expire even when left in backups or deep in the objects of your git repo. If a bearer token is compromised, it's only valid for your expiration window. If a refresh token is compromised and used, the legitimate client will be knocked offline, increasing the likelihood of detection. This property also lets you know whether a leaked refresh token was used at all before it was revoked.

    Audit and monitoring opportunities. Every refresh creates a logging checkpoint where you can track usage patterns, detect anomalies, and enforce policy changes. This gives you natural rate limiting and abuse detection points.

    Most security frameworks (SOC 2, ISO 27001, etc.) prefer time-limited credentials as a basic security control.

    Add an expiration time to refresh tokens to naturally clean up access from broken or no-longer-used clients. Example: a daily backup script whose refresh token has a 90-day expiration window. The backups would have to fail to run for 90 days before the token became an issue, and if it was still needed the effort is low: just provision a new key. After 90 days of failure you either already needed to perform maintenance on your backup system, or you had moved to something else without revoking the access keys.

  19. That is a very specific form of refresh token, but it's not the only model. You can just as easily have your "API key" be the refresh token: you submit it to an authentication endpoint, get back a new refresh token and a bearer token, and invalidate the previous bearer token if it was still valid. The bearer token will naturally expire; if you're still using the API, use the new refresh token immediately, and if it's days or weeks later, you can use it then (see the client-side sketch below).

    There doesn't need to be any OIDC or third party involved to get all of these benefits. The keys can't be used by multiple simultaneous clients, they naturally expire and rotate over time, and you can easily audit their use (primarily due to the last two principles).
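
    A minimal client-side sketch of that exchange; the endpoint URLs, field names, and the use of the requests library are all assumptions for illustration, not any particular API:

    ```python
    import time
    import requests  # assumed HTTP client (pip install requests)

    # Hypothetical endpoints for illustration.
    AUTH_URL = "https://api.example.com/auth/refresh"
    API_URL = "https://api.example.com/v1/data"

    class TokenClient:
        def __init__(self, refresh_token):
            self.refresh_token = refresh_token  # persist this across runs
            self.bearer = None
            self.expires_at = 0.0

        def _refresh(self):
            # Exchange the one-time refresh token for a new pair.
            resp = requests.post(AUTH_URL,
                                 json={"refresh_token": self.refresh_token})
            resp.raise_for_status()
            body = resp.json()
            self.refresh_token = body["refresh_token"]  # keep the replacement
            self.bearer = body["bearer_token"]
            self.expires_at = time.time() + body["expires_in"]

        def get(self, url):
            # Refresh lazily: this works the same whether the last call
            # was five seconds ago or five weeks ago.
            if self.bearer is None or time.time() >= self.expires_at:
                self._refresh()
            return requests.get(
                url, headers={"Authorization": f"Bearer {self.bearer}"})

    # Usage: client = TokenClient(stored_refresh_token); client.get(API_URL)
    ```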

  20. There are other options that allow long-lived access with naturally rotating keys, without OAuth, and with only a tiny increase in complexity that can be managed by a bash script. The refresh token/bearer token combo is pretty powerful and has MUCH stronger security properties than a bare API key.

