- I think offering client-side transcode as an option, with server-side transcode available for those who don't want to do it client side, is compelling. I would probably do it, as I have a powerful home system that can transcode much faster than my cloud host (I do use the remote transcoding feature in Peertube, though).
- I'm using B2, so maybe that's it. I have the instance configured to serve video directly from B2 rather than proxying it. As far as I'm aware, Peertube has no facility to keep the first few MB of each video local to the server.
- I think GP meant making the user perform transcoding at upload time
- If you scale an instance you need to use object storage (s3/b2/etc). Fetches from object storage can occasionally have latency spikes.

5 seconds is somewhat of an exaggeration; I clicked through 10 or so videos on my instance to check, and it's 2-3 seconds most of the time.
- > Would it change the equation, meaningfully, if you didn't offer any transcoding on the server and required users to run any transcoding they needed on their own hardware?
I think the user experience would be quite poor, enough that nobody would use the instance. As an example, a 4k video will be transcoded at least 2 times, to 1080p and 720p, and depending on server config often several more times (see the sketch at the end of this comment). Each transcode job takes a long time, even with substantial hwaccel on a desktop.
Very high bitrate video is quite common now since most phones, action cameras etc are capable of 4k30 and often 4k60.
> Do you think a general user couldn't handle the workload (mobile processing, battery, etc), or would that be fairly reasonable for a modern device and only onerous.
If I had to guess, I would expect it to be a poor experience. Say I take a 5 minute video; that's probably around 3-5 GB. I upload it, then need to wait - in the foreground - for this video to be transcoded and then uploaded to object storage 3 times, on a phone chip. People won't do it.

I do like the idea of offloading transcode to users. I wonder if it might be suited for something like https://rendernetwork.com/, where users contribute idle compute to a transcode pool in exchange for upload & storage rights, and still get fire-and-forget uploads?
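To make the ladder concrete, here's a minimal sketch, assuming Node/TypeScript with ffmpeg on PATH; the rung heights and CRF values are illustrative, not Peertube's actual settings:

```ts
import { execFileSync } from "node:child_process";

// Illustrative ladder: one full re-encode of the source per rung.
// Heights and CRF values are assumptions, not Peertube's defaults.
const rungs = [
  { height: 1080, crf: 23 },
  { height: 720, crf: 24 },
];

for (const { height, crf } of rungs) {
  execFileSync("ffmpeg", [
    "-i", "input.mp4",
    "-vf", `scale=-2:${height}`, // -2 keeps the width divisible by 2
    "-c:v", "libx264", "-crf", String(crf),
    "-c:a", "aac", "-b:a", "128k",
    `output_${height}p.mp4`,
  ], { stdio: "inherit" });
}
```

Each rung is a full re-encode of the source, which is why a single upload fans out into several long-running jobs.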
- By "my own stuff" I mean that I use my instance to upload videos I would otherwise upload to youtube - videos I made that I intend to share with people. The usual reasons for transcoding apply.
- From hosting a Peertube instance solely for my own stuff for several years, I've come to appreciate just how difficult self-hosting a streaming video platform is. As you say, bandwidth and storage requirements are significant; another, less obvious one is transcoding. When a user uploads an HD video file, it needs to be transcoded into lower resolutions if you want there to be any hope of people streaming it.

While Peertube itself is perfectly happy running on 2-4 vCPU cores on a cheap cloud VM, if you use those cores to handle transcode jobs it can take huge amounts of time (like 20+ hours) to transcode even medium-length 1080p videos. So you really need either a lot of CPU that sits mostly idle, or hardware acceleration, both of which are expensive when purchased from cloud providers. Or you can use remote transcoding to offload transcode jobs onto your home gaming PC or whatever (see the sketch at the end of this comment), which works well, but can be complicated and a bit touchy to set up properly, and now you have a point of failure dependent on your home network...

And then, people watching videos are used to the YouTube experience, with its world-class CDN infra enabling sub-second first-frame latencies even for 4k videos. They go on Peertube and the first frame takes like 5 seconds for a 1080p video... realistically, with today's attention spans, most of them are going to bounce before it ever plays.
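As for hardware acceleration, here's a minimal sketch of the kind of job you'd offload to a home machine, assuming an NVIDIA GPU and an ffmpeg build with NVENC support (filenames and preset values are illustrative):

```ts
import { execFileSync } from "node:child_process";

// One 1080p rung encoded on the GPU. On a desktop with NVENC this
// typically finishes far faster than libx264 on 2-4 cloud vCPUs.
execFileSync("ffmpeg", [
  "-hwaccel", "cuda",           // decode on the GPU where possible
  "-i", "input.mp4",
  "-vf", "scale=-2:1080",       // downscale; -2 keeps the width even
  "-c:v", "h264_nvenc",         // NVIDIA hardware encoder
  "-preset", "p5", "-cq", "23", // quality-targeted rate control
  "-c:a", "aac", "-b:a", "128k",
  "output_1080p.mp4",
], { stdio: "inherit" });
```

This is just the shape of the work a remote runner performs, not Peertube's actual invocation.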
- idk man it feels pretty useful to me
- No doubt you were doing a myriad of other things that were worthwhile to you at the time.
- Just to follow up on this, 1 day later:
- It's still wrong
- The website now has a "get premium for $6 first 100 customers only!" banner
Vibe-coded trash.
- Can you expound a bit on the problem domains? I am curious
- Other states do as well.
- Let me preface this by saying I use passkeys with KeePassXC.

According to WebAuthn, this is not true. Such passkeys are considered "synced passkeys", distinct from "device-bound" passkeys, which are supposed to be stored in an HSM. WebAuthn allows an RP to "require" (scare quotes) that a passkey be device-bound. Furthermore, the RP can "require" that a specific key store be used; Microsoft's enterprise offering, for example, requires the use of Microsoft Authenticator.

You might ask: how is this enforced? For example, can't KeePassXC simply report that it is a hardware device, or that it is Microsoft Authenticator?

The answer is that there are no mechanisms to enforce this. Yes, KeePassXC can do exactly that. So while you are actually correct that it's possible, the protocol itself pretends that it isn't, which is just one of the many issues with passkeys.
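For concreteness, here's roughly what that RP-side "requirement" looks like in the browser's WebAuthn registration call; a hedged sketch where the option values are illustrative and a real RP would supply the challenge and user fields from its server:

```ts
// Browser-side registration sketch. The RP "requires" a device-bound,
// attested credential, but, as noted above, a software authenticator
// can simply claim to satisfy all of this.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // really from the server
    rp: { name: "Example RP", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-123"),
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      authenticatorAttachment: "cross-platform", // "require" an external key
      residentKey: "required",
      userVerification: "required",
    },
    attestation: "direct", // ask the authenticator to identify itself
  },
});
```

The attestation response is generated by the authenticator itself, which is exactly the gap described above: the key store decides what to report.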
- Yes, PKC authentication is good, but the way passkeys have been implemented is not great. Way too much trust built into the protocol; way too much power granted to relying parties; much harder for users to form a correct mental model.
- > Frameworks are generally more expensive than Macs, sometimes 50% - 100% more expensive for a similar laptop.
Do you have an example? An 8TB M4 MacBook Pro runs over 7 grand; the comparable HX 370 Framework 13 is barely over 3 grand. I bought both within the last couple of months and found the Macs to be significantly more expensive in the segment I was looking at.

- Yes, it's this. I also own an M4 MacBook Pro and an AMD Framework 13. With both at maximum screen brightness, side by side, doing similar workloads, battery life isn't that much better on the M4. I think the difference maker is that the Mac constantly decreases screen brightness when possible, turns the backlight completely off when there isn't any activity, heavily leverages power-efficient scheduling and efficiency cores, no doubt turns off power to peripherals whenever possible, and so on. And of course lid-closed suspend on a Mac lasts indefinitely. Arch does none of these things, and even on cohesive distros like Fedora there's only so much you can do in userland. Linux is designed for compatibility across a huge breadth of devices; Darwin only has to support Mac hardware and can extract every ounce of power efficiency from deep hardware integration.
- When I switched off Android >5 years ago, even then, it was as simple as turning on the hotspot and connecting to it. It was no more cumbersome than any other wifi network. This was with a Pixel device and Linux laptop, and I am sure it works on Windows too.
- Porn is popular!