That's a tall claim.
I've been selling a macOS and iOS private LLM app on the App Store for over two years now, one that is:
a) fully native (not electron.js), b) not a llama.cpp / MLX wrapper, and c) fully sandboxed (none of Jan, Ollama, or LM Studio are).
I will not promote. Quite shameless of you to shill your electron.js-based llama.cpp wrapper here.
> I accept every challenge to prove that HugstonOne is worth the claim.
I expect your review.
I’ll remind you,
> If you [are] looking for privacy there is only 1 app in the whole wide internet right now, HugstonOne (I challenge everyone to find another local GUI with that privacy).
Heck, if you look at the original comment, it clearly states it’s macOS and iOS native,
> I've been selling a macOS and iOS private LLM app on the App Store for over two years now, one that is:
> a) fully native (not electron.js), b) not a llama.cpp / MLX wrapper, and c) fully sandboxed (none of Jan, Ollama, or LM Studio are).
How do you expect it to be both native and cross-platform? Isn't HugstonOne Windows-only?
So, what are your privacy arguments? Don't move the goalposts.
Now, for real: I wish to meet more people like you. I admire your professional way of arguing, and I really wish you all the best :)
It's not open source, has no license, runs on Windows only, and requires an activation code to use.
Also, the privacy policy on their website is missing[2].
Anyone remotely concerned about privacy wouldn't come near this thing.
Ah, you're the author; no wonder you're shilling for it.
Great to hear! Since you care so much about privacy, how can I get an activation code without sending any bytes over a network or revealing my email address?
Llama.cpp's built-in web UI.
I tried downloading your app, and it's a whopping 500 MB. What takes up the most disk space? The llama-server binary with the built-in web UI is like a couple MBs.
> the app is a bit heavy as is loading llm models using llama.cpp cli
So it adds the unnecessary overhead of reloading all the weights into VRAM on each message? On larger models that can take up to a minute. Or do you somehow stream input/output from an attached CLI process without restarting it?
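For reference, a minimal sketch of that second option, assuming llama-server is on the PATH; the model path, port, and the crude startup wait are placeholders, not anything the app in question is known to do. The weights go into (V)RAM once when the server starts, and every later message is just an HTTP request to the resident process:

    # Minimal sketch: keep llama.cpp resident as a local server so the weights
    # load once, then send each message as an HTTP request.
    # Model path, port, and the fixed startup wait are placeholders.
    import json
    import subprocess
    import time
    import urllib.request

    MODEL = "models/example-7b-q4_k_m.gguf"   # placeholder path
    PORT = 8080

    # Weights are loaded into (V)RAM exactly once, when the server starts.
    server = subprocess.Popen(
        ["llama-server", "-m", MODEL, "--host", "127.0.0.1", "--port", str(PORT)]
    )
    time.sleep(15)  # crude wait for the model to finish loading

    def ask(prompt: str) -> str:
        # Each message is just a request to the already-loaded model.
        req = urllib.request.Request(
            f"http://127.0.0.1:{PORT}/v1/chat/completions",
            data=json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]

    print(ask("First message"))
    print(ask("Second message"))  # no weight reload between messages

    server.terminate()

The same loop against a CLI binary that exits after each prompt would pay the full model load on every message, which is where the minute-long stalls come from.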
What in the world are you trying to say here? llama.cpp can run completely locally, and web access can be limited to localhost only. That's entirely private and offline (after downloading a model). I can't tell if you're spreading FUD about llama.cpp or are just generally misinformed about how it works. You certainly have some motivated reasoning in trying to promote your app, which makes your replies seem very disingenuous.
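To make the localhost point concrete, a small sketch, assuming the server above was started with --host 127.0.0.1 --port 8080 and using 192.168.1.50 as a stand-in for the machine's own LAN address (both placeholders):

    # Minimal check of the localhost-only point: with llama-server started as
    #   llama-server -m model.gguf --host 127.0.0.1 --port 8080
    # the API answers on the loopback interface and nothing is exposed to the
    # rest of the network. Port and the LAN address below are placeholders.
    import socket

    def port_open(host: str, port: int) -> bool:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            return False

    print(port_open("127.0.0.1", 8080))     # True: reachable from this machine
    print(port_open("192.168.1.50", 8080))  # False: assuming that's this machine's
                                            # own LAN address, other hosts get nothing

Past the initial model download, nothing in that setup needs a network connection at all.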