Today, my focus is on personalizing the internet at Kenobi.ai (YC W22)
Reach me at my name (Rory) and my company’s domain.
- I really don't think this should be a registry-level issue. As in, the friction shouldn't be introduced into _publishing_ workflows; it should be introduced into _subscription_ workflows, where there is an easy fix: just stop supporting auto-updating (through wildcard patch or minor version ranges) by default. Make the default behaviour to install whatever version was resolved at install time (like `npm ci` does).
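To illustrate the difference (the package names here are hypothetical, just for the example): a caret range lets `npm install` resolve to any newer compatible release, while an exact spec pins one version; `npm ci` goes further and installs exactly what `package-lock.json` recorded, ignoring ranges entirely. A `package.json` fragment showing both forms:

```json
{
  "dependencies": {
    "ranged-lib": "^1.2.3",
    "pinned-lib": "1.2.3"
  }
}
```

Here `ranged-lib` may silently float up to any `1.x.y >= 1.2.3` on a fresh install, whereas `pinned-lib` always installs 1.2.3.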
- We think that in commercial buying there will still be a place for “discovery”, where B2B visitors benefit from being able to independently digest public-facing materials themselves. We also think this means adoption of agentic browsing will be slower than people expect.
However, we did already start experimenting with agentic browsers like Atlas and Strawberry — I built a PoC for the former. But this is still very much experimental!
Edit:
Just wanted to add that your question is a prescient one: it’s something we get asked a lot by investors, VCs, etc., but hardly ever by people who run businesses with websites, or by the people who visit them to do commercial buying.
- Right now (what we’re demoing), we do “on demand” personalisation, so there isn’t really an SEO angle. However, we started with pre-rendering changes onto hardcoded URLs, and while that did affect content, we didn’t see any SEO issues come up, since those URLs were only being used in campaigns.
- Thank you. Accessibility came up in another comment and to be honest we've only thought in terms of _preserving_ accessibility so far, not _improving_ it (as you're suggesting) -- would love to see if we could explore something along these lines soon even though right now we're focussed on B2B...
- > if the buyer enters their company and the copy just changes into what we think they want, are they going to lose trust that the copy is a true representation of our focus?
Great point. That's pertinent to how we've been configuring the pipelines that turn research into computed "intent". Our focus right now is mainly just to streamline content and show brands "in context" as much as possible, without having too much of an "opinion".
Your idea about showing specifically how the company could be helped, plus a use-case, is a lovely way of putting some of the more complex layout-generation ideas we've been working on!
- Thank you!
> many people with a business profile that would certainly click a "just show me what I care about" button
You've encapsulated better than I did the kind of visitor segment we're building for right now.
> I do things like this every day with AI, it feels rather trivial to me.
I also agree with this sentiment. It's how I started ("surely it can't be hard to jiggle a page around now with LLMs") and it mostly worked! But the edge cases and heuristics are also proving to be a big chunk of the effort "iceberg" :D
- Appreciate the candid feedback!
The number one request we have is to integrate deanonymisation, so you’re right on the money there. That’s coming in the next couple of weeks or so…
Regarding the changes being text-based for now: we do actually have image generation and more complex element/layout generation working, but we have kept it as an experiment for pre-rendered pages until we are confident we can get it right in most cases. (Some early beta users used Kenobi to send out, in some cases, thousands of customised landing pages with imagery.)
We’re also starting our on-demand product with text only, precisely because we want to hear what people think we should be working on next, as we are a super small team of three!
- In general, the case we’re building for here is one where we can do two things effectively. First, “streamlining” content: instead of seven different use-cases spread across a maze of a brochure site, as a visitor you see the top one or two use-cases you actually care about, in far more detail.
Second, we want to show B2B visitors’ brands in context, i.e. showing what it would look like if your company were using the service in question, with social proof from your industry peers. We don’t have our image tech in the on-demand demo right now, but companies we’ve helped pre-render copies of their site with dynamic images (especially e-commerce brands) found higher engagement on their outreach as a result.
- Felix handles our personalisation pipeline, so I'll let him chime in. We do some fast scraping (cached where possible) to understand the host site, and pull in as much extra page data as possible. We also use other (firmographic) sources like Apollo.io.
Neat idea to track the technologies they've bought _recently_, though! I think capturing buying signals (and inferring intent that way) would be a great addition to the pipeline.
- Fair criticism! We actually got funded for a different idea (this was a pivot), so perhaps you're onto something.... Jokes aside, to address your points:
Privacy: Visitors currently opt in and provide only the company they are from (no other PII).
Security: Our current version handles only text (we have images + dynamic content in the works), but we do our best to build a sanitisation pipeline around it; it's the same kind of problem that any arbitrary-code widget builder needs to account for.
Accessibility: I'm keen to hear what concerns or ideas you have here, since we interpolate content and copy existing structure (so that a11y is kept as intact as possible).
Advertising: Companies choose to integrate this themselves on their own websites, so if anything we see it as forming part of their marketing strategy.
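As a minimal sketch of the escape-before-insert idea behind such a sanitisation step (our real pipeline is more involved, and `sanitizeText` is a name made up for this example): any model-generated string gets its markup characters escaped before it touches the page, so it can only ever render as plain text.

```javascript
// Hypothetical sketch: escape HTML metacharacters in generated copy
// so it renders as text rather than executing as markup.
function sanitizeText(modelOutput) {
  return modelOutput
    .replace(/&/g, "&amp;")   // must run first, before entities are added
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(sanitizeText('<script>alert(1)</script>'));
// -> &lt;script&gt;alert(1)&lt;/script&gt;
```

(In practice, assigning generated copy via `textContent` instead of `innerHTML` achieves the same guarantee without manual escaping.)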
- Kenobi / https://kenobi.ai
I’m working on a way to personalize the content of your website for any visitor, with minimal setup (it’s just a script tag). We’ve just launched, so if anyone wants to try it, reach out! My email is in my profile.
- We’re working on something adjacent to this[0], making fluid UIs for public-facing (marketing/landing content) front ends. AI allows even compiled code to be arbitrarily modified on the fly, and it’s only going to get easier to start with a “base” of content, functionality, and components, and compose the best outcome for a user.
[0] - https://kenobi.ai
- V1 felt polished to a degree that implied the developers had thought a lot about how their product should provide a compelling user experience. It was also very performant and rarely crashed.
V2 was buggy from the off -- for me -- and crashed frequently. It felt palpably slower, and the changes to the featureset were IMO perfunctory (I don't have concrete examples in mind, but I remember feeling that way at the time).
- After the V2 suite was released a few years ago, I realised I would never get the "old" Affinity product experience back -- the same experience and price-point that made me a great and productive self-taught illustrator / designer.
C'est la vie, all good things must come to an end. I'm glad the original team made it out with a financial reward (from the Canva sale)...
Time for someone else to pick up the mantle! [and for everyone else to stop moaning]
- I've been thinking a lot recently about how far we could model human existence as a foundation model (or multiple models, one for each core part of the brain) hooked up to a load of sensors (such as an 'optic nerve feed', 'temperature', or 'cortisol levels') as input, responding to tool calls, and have all of this stream out as structured output controlling speech, motor movement, and other physiological functions.
I don't know if anyone is working on modelling human existence (via LMMs) in this way... It feels like a Frankensteinian and eccentric project idea, but certainly a fun one!