I had it build the game, opting for a single room at first to see if that worked.
Then I had it add multiple rooms on a different git branch in case that didn't work. It worked great.
I've learned very little about Elixir, Phoenix, or deploying to Fly.io up to this point, and I already have a nice-looking app deployed and running.
I know a lot of devs will hate that this is possible, and it is up to me now to look at the steps it took to create this and really understand what is happening, which are broken down extremely simply for me...
I will do this because I want to learn. I bet a lot of people won't bother to do that. But those people never would have had apps in the first place and now they can. If they are creating fun experiences and not banking apps, I think that is still great.
You guys have been releasing amazing things for years only to be poorly replicated in other languages years later.. but you really outdid yourselves here.
I'm blown away.
edit: is there a way to see how much of my credits were used by building this?
chrismccord
This is amazing on multiple fronts! I reset your usage, so the next round is on us! We shipped credits the day before launch, so usage UI is still TBD, but should be out next week. Thanks for sharing your experience!
neoecos
Hi Chris, is there any way to get more credits or BYO API key for Anthropic/OpenAI? I'm trying to make a Kahoot clone and have already spent more than $40 in a couple of hours.
freedomben
Based on how much they seem to charge (I blew through the initial $20 in like an hour; equivalent use in Claude Code would have been around $3), they're clearly making a pretty big margin on top of the API calls. I doubt they're going to allow BYOK.
jonahx
Was the graphic design created from prompts too? It's surprisingly nice, especially considering you spent 45 minutes on it.
colecut
I told it that I wanted a two player tic tac toe game.
It gave me a selection of "styles" and I chose neon retro. I probably could have been more creative and typed in my own suggestion.
Other than that, I said absolutely nothing about how I wanted the layout.
It came up with the idea of listing all active games on the homepage, with the number of players in each, all on its own.
I went from "I want a two player tic tac toe game" to having one, and then added multiple rooms, and deployed it all in under 45 minutes, with little input other than that..
curiouser3
Did you figure out how much credit was used? I want to try this out, but $20 of credit can go quick doing agentic work
colecut
I'm not sure exactly but I think I used nearly all of it.
I've seen others say they went through the full $20 within 45 minutes to an hour.
They are supposed to be adding a way to monitor usage soon.
chrismccord
Phoenix creator here. I'm happy to answer any questions about this! Also worth noting that phoenix.new is a global Elixir cluster that spans the planet. If you sign up in Australia, you get an IDE and agent placed in Sydney.
tiffanyh
Amazing work.
Just a clarifying question since I'm confused by the branding use of "Phoenix.new" (since I associate "Phoenix" as a web framework for Elixir apps but this seems to be a lot more than that).
- Is "Phoenix.new" an IDE?
- Is "Phoenix.new" ... AI to help you create an app using the Phoenix web framework for Elixir?
- Does "Phoenix.new" require the app to be hosted/deployed on Fly.io? If that's the case, maybe a naming like "phoenix.flyio.new" would be better and extensible for any type of service Fly.io helps in deployment - Phoenix/Elixir being one)
- Is it all 3 above?
And how does this compare to Tidewave.ai (created as presumably you know, by Elixir creator)
Apologies if I'm possibly conflating topics here.
chrismccord
Yes all 3. It has been weird trying to position/brand this as we started out just going for full-stack Elixir/Phoenix and it became very clear this is already much bigger than a single stack. That said, we wanted to nail a single stack super well to start and the agent is tailored for vibe'd apps atm. I want to introduce a pair mode next for more leveled assistance without having to nag it.
You could absolutely treat phoenix.new as your full dev IDE environment, but I think about it less as an IDE and more as a remote runtime where agents get work done, one you pop into as needed. Or another way to think about it: the agent doesn't care about or need the vscode IDE or xterm. They are purely conveniences for us meaty humans.
For me, something like this is the future of programming. Agents fiddling away and we pop in to see what's going on or work on things they aren't well suited for.
Tidewave is focused on improving your local dev experience while we sit on the infra/remote agent/codex/devin/jules side of the fence. Tidewave also has a MCP server which Phoenix.new could integrate with that runs inside your app itself.
tills13
> For me, something like this is the future of programming. Agents fiddling away and we pop in to see what's going on or work on things they aren't well suited for.
Honestly, this is depressing. Pop in from what? Our factory jobs?
brainless
I understand that we are slowly taking away our own jobs, but I do not find it depressing. I do find it concerning, since most people do not talk about this openly. We are not sure how we will restructure so many jobs. If people cannot find jobs, what is the financial future for a large number of people across the world? This needs more thinking and honest acceptance of the situation. It will happen; we should take a positive approach to finding a new future.
> The Phoenix.new environment includes a headless Chrome browser that our agent knows how to drive. Prompt it to add a front-end feature to your application, and it won’t just sketch the code out and make sure it compiles and lints. It’ll pull the app up itself and poke at the UI, simultaneously looking at the page content, JavaScript state, and server-side logs.
Is it possible to get that headless Chrome browser + agent working locally? With something like Cursor?
tomashubelbauer
Playwright has an MCP server which I believe should be able to give you this.
tyre
When Roo Code uses Claude, it does this while developing. It renders in the sidebar and you can watch it navigate around. Incredibly slow, but that’s only a matter of time.
ewuhic
Does it work with VSCode GitHub Copilot LLM provider? They have Claude in there
troupo
I know it's early days, but here's a must-have wish list for me:
- ability to run locally somehow. I have my own IDE, tools etc. Browser IDEs are definitely not something I use willingly.
- ability to get all code, and deploy it myself, anywhere
---
Edit: forgot to add. I like that every video in the Elixir/Phoenix space is the spiritual successor to the "15-minute rails blog" from 20 years ago. No marketing bullshit, just people actually using the stuff they build.
chrismccord
You can push and pull code to and from local desktop already: hamburger menu => copy git clone/copy git push.
You could also have it use GitHub and do PRs for a codex/devin style workflows. Running phoenix.new itself locally isn't something we're planning, but opening the runtime for SSH access is high on our list. Then you could do remote ssh access with local vscode or whatever.
For sure. I'm just hesitant to recommend sending one's codebase to a server running code I can't inspect. I suppose that's the status quo with LLMs these days, though.
chrismccord
confirm
chrismccord
"15-minute rails blog" changed the game so I definitely resonate with this. My videos are pretty raw, so happy to hear it works for some folks.
fridder
Running locally or in your private cloud would be amazing. The latter would be a great paid option for large enterprises.
pmarreck
Include optional default email, auth, analytics, job management (you know… the one everyone uses ::cough:: Oban ::cough::), dev/staging/prod modes (with “deployment” or something akin to CD… I know it’s already in the cloud, but you know what I mean) and some kind of non-ephemeral disk storage, maybe even domain management… and this will slay. Base44 just got bought for $80M for supplying all those, but nothing is as cool as Elixir of course!
These other details that are not “just coding” are always the biggest actual impediments to “showing your work”. Thanks for making this!! Somehow I am only just discovering it (toddler kid robbing my “learning tech by osmosis” time… a phenomenon I believe you are also currently familiar with, lol)
Snakes3727
Hi, just to confirm, as I cannot find anything related to security or your use of submitted code for training purposes: what are your security policies with regard to that?
mrkurt
We don't do any model training, and only use existing open source or hosted models. Code gets sent to those providers in context windows. They all promise not to train on it, so far.
tptacek
Did I not say it good enough, Kurt?
NinoScript
You said it terribly to be honest
tptacek
Ask some security questions, I'll get you security answers. We're not a model company; we don't "train" anything.
krts-
Is there a transparent way to see credit used/remaining/topped up, and do you have any tips for how you can prompt the agent that might offer more effective use of credits?
The LLM chat taps out but I can't find a remaining balance on the fly.io dashboard to gauge how I'm using it. I _can_ see a total value of purchased top ups, but I'm not clear how much credit was included in the subscription.
It's very addictive (because it is awesome!) but I've topped up a couple of times now on a small project. The amount of work I can get out the agent per top-up does seem to be diminishing quite quickly, presumably as the context size increases.
burnt-resistor
Is there something comparable that works similarly but completely offline with appropriate hardware? Not everywhere has internet or trusts remote execution and data storage.
PS: Why can't I get IEx to have working command-line history and editing? ;-P
mrdoops
Any takeaways on using Fly APIs for provisioning isolated environments? I'm looking into doing something similar to Phoenix.new but for a low-code server-less workflow system.
chrismccord
1 week of work to go from local-only to fly provisioned IDE machines with all the proxying. fly-replay is the unsung hero in this case, that's how we can route the *.phx.run urls to your running dev servers, how we proxy `git push` to phoenix.new to your IDE's git server, and how we frame your app preview within the IDE in a way that works with Safari (cross origin websocket iframes are a no go). We're also doing a bunch of other neat tricks involving object storage, which we'll write about at some point. Feel free to reach out in slack/email if you want to chat more.
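To make the fly-replay part concrete, here's a tiny hypothetical Plug-shaped sketch (module names and the subdomain-to-machine lookup are made up, not our actual code): the app just answers with a fly-replay response header and Fly's proxy replays the request against the named machine.

    defmodule DemoProxy.ReplayPlug do
      # Hypothetical sketch: route subdomain.phx.run to a user's dev machine by
      # answering with a `fly-replay` response header. Fly's proxy intercepts the
      # response and replays the request against that machine; the status/body
      # returned here are consumed by the proxy rather than the client.
      import Plug.Conn

      def init(opts), do: opts

      def call(%Plug.Conn{host: host} = conn, _opts) do
        with [subdomain, "phx", "run"] <- String.split(host, "."),
             {:ok, machine_id} <- lookup_dev_machine(subdomain) do
          conn
          |> put_resp_header("fly-replay", "instance=#{machine_id}")
          |> send_resp(204, "")
          |> halt()
        else
          _ -> conn
        end
      end

      # Placeholder for whatever registry maps subdomains to running IDE machines.
      defp lookup_dev_machine(_subdomain), do: {:ok, "machine-id-goes-here"}
    end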
randito
Would love to read about some of the techniques for how you accomplished this.
mrdoops
Thanks, I might hit you up when I'm in the weeds of that feature.
miki123211
1. What's your approach to accessibility? Do you test accessibility of the phoenix.new UI? Considering that many people effectively use Phoenix to write front-ends, have you conducted any evals on how accessible those frontends come out?
2. How do you handle 3rd party libraries? Can the agent access library docs somehow? Considering that Elixir is less popular than more mainstream languages, and hence has less training data available, this seems like an important problem to solve.
chickensong
It seems like they're giving you lower-level building blocks here. It's up to the developer to address these things. Instruct the agent to build/test for accessibility, feed it docs via MCP or by other means.
rramon
They use the Daisy UI component library in 1.8+ Phoenix versions which should have basic accessibility baked in.
sho
Watched the Tetris demo of this and it was very impressive. I was particularly surprised how well it seems to work with the brand-new scopes, despite the obvious lack of much prior art. How did you get around this, how much work was the prompt, and are you comfortable sharing it?
beepbooptheory
What is the benefit of this vs. just running your agent of choice in any ole container?
tptacek
The whole post is about that. Not everything is for everybody, so if it doesn't resonate for you, that's totally OK.
beepbooptheory
Oh geez so sorry for the dumb question! I read a lot about the benefits of containerization in general for agents, but thought it might be enlightening/instructive to know what this specific project adds to that (other than the special Elixir-tuned prompting).
But either way I hear you, thanks so much for taking the time to set me straight. It seems like either way you have done some visionary things here and you should be content with your good work! This stuff does not work for me for just circumstantial reasons (too poor), but still always very curious about the stuff coming out!
Again, so sorry. Congrats on the release and hope your day is good.
tptacek
You're fine! Just encouraging people to read Chris's post. :)
This looks amazing! I keep loving Phoenix more the more I use it.
I was curious what the pricing for this is? Is it normal fly pricing for an instance, and is there any AI cost or environment cost?
And can it do multiple projects on different domains?
manmal
It’s $20 per month if you click through, and I haven’t tried it but almost certainly the normal hosting costs will be added on top.
sevenseacat
I've tried it, the $20 of included credits lasted me about 45 minutes
finder83
Thanks, apparently didn't click through enough
Munksgaard
Just tried it out, but it's unclear what the different buttons at the bottom of the chat history do. The rightmost one (cloud with an upwards arrow) seems to do the same as the first?
Munksgaard
I'm also having trouble with getting it to read PDFs from URLs. I got this error:
Do you have a package for calling LLM services we can use? This service is neat, but I don't need another LLM IDE built in Elixir but I COULD really use a way to call LLMs from Elixir.
chrismccord
Req.post to /chat/completions, streaming the tokens through a parser and doing regular elixir messages. It's really not more complicated than that :)
throwawaymaths
even less complicated, just set stream: false in your json :)
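For anyone who wants to see just how small that is, here's a minimal non-streaming sketch with Req (the endpoint, model, and env var are illustrative assumptions; swap in whichever provider you use):

    # Minimal one-shot chat completion call with Req, as an .exs script.
    # Assumes an OpenAI-compatible endpoint; URL, model, and the
    # OPENAI_API_KEY env var are illustrative.
    Mix.install([{:req, "~> 0.5"}])

    resp =
      Req.post!("https://api.openai.com/v1/chat/completions",
        auth: {:bearer, System.fetch_env!("OPENAI_API_KEY")},
        json: %{
          model: "gpt-4o-mini",
          stream: false,
          messages: [
            %{role: "system", content: "You are a helpful assistant."},
            %{role: "user", content: "Say hello in Elixir."}
          ]
        }
      )

    # Req decodes the JSON body into a map for us.
    resp.body["choices"] |> hd() |> get_in(["message", "content"]) |> IO.puts()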
arrowsmith
Thanks for everything you do Chris! Keep crushing it.
cosmic_cheese
How tightly coupled to Fly.io are generated apps?
chrismccord
Everything starts as a stock phx.new app, which uses SQLite by default. Nothing is specific to Fly. You should be able to copy the git clone URL, paste it, cd && mix deps.get && mix phx.server locally, and the app will just work.
freedomben
If you're willing to share, is maintaining that modularization the plan going forward? I'm pretty happy to use and pay for this and deploy it to fly, but only as long as I'm not "locked in."
kuatroka
Does it mean I can build and deploy a SQLite-based app on fly.io with this approach without using Postgres? If yes, how does the pricing for the permanent storage needed for SQLite work? Thanks
What LLM(s) is the agent using? Are you fine-tuning the model? Is the agent/model development a proprietary effort?
chrismccord
Currently claude 4 sonnet as the main driver, with a combination of smaller models for certain scenarios
jonator
I'm assuming you're using FLAME?
How do you protect the host Elixir app from the agent shell, runtime, etc.?
chrismccord
Not using FLAME in this case. The agent runs entirely separately from your apps/IDE/compute. It communicates with and drives your runtime over phoenix channels
jonator
Oh interesting. So how do messages come from the container? Is there a host elixir app that is running the agent env? How does that work?
chrismccord
Yes, an Elixir app deployed across the planet as a single Elixir cluster. We spawn the agents (GenServers), globally register them, and then the end-user LiveView chat communicates with the agent via regular Elixir messages, and the IDE is a Phoenix Channels client that communicates with and is driven by the agent.
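Roughly the shape of it, with made-up names (a sketch, not our internals):

    defmodule Demo.AgentSession do
      # One agent process per project, registered cluster-wide so a LiveView on
      # any node can reach it by name and exchange plain Elixir messages.
      use GenServer

      def start_link(project_id),
        do: GenServer.start_link(__MODULE__, project_id, name: via(project_id))

      def via(project_id), do: {:global, {:agent, project_id}}

      def prompt(project_id, text, reply_to),
        do: GenServer.cast(via(project_id), {:prompt, text, reply_to})

      @impl true
      def init(project_id), do: {:ok, %{project_id: project_id}}

      @impl true
      def handle_cast({:prompt, text, reply_to}, state) do
        # A real agent would call the model here and drive the IDE machine over a
        # Phoenix channel; this sketch just echoes a token back to the caller.
        send(reply_to, {:agent_token, "echo: " <> text})
        {:noreply, state}
      end
    end

    defmodule DemoWeb.ChatLive do
      use Phoenix.LiveView

      def mount(%{"project_id" => id}, _session, socket),
        do: {:ok, assign(socket, project_id: id, transcript: "")}

      # The chat LiveView just sends and receives ordinary Elixir messages.
      def handle_event("send", %{"text" => text}, socket) do
        Demo.AgentSession.prompt(socket.assigns.project_id, text, self())
        {:noreply, socket}
      end

      def handle_info({:agent_token, token}, socket),
        do: {:noreply, update(socket, :transcript, &(&1 <> token))}

      def render(assigns) do
        ~H"""
        <pre><%= @transcript %></pre>
        <form phx-submit="send"><input name="text" /></form>
        """
      end
    end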
b0a04gl
how are they isolating ai agent state from app-level processes without breaking BEAM's supervision guarantees?
chrismccord
They run on separate machines and your agent just controls the remote runtime when it needs to interact with the system/write/read/etc
b0a04gl
appreciate the clarity, that helps.
quick followup if the agent's running on a separate machine and interacting remotely, how are failure modes handled across the boundary? like if the agent crashes mid-operation or sends a malformed command, does the remote runtime treat it as an external actor or is there a strategy linking both ends for fault recovery or rollback? just trying to understand where the fault tolerance guarantees begin and end across that split.
chrismccord
token auth and re-handshake. Agent is respawned if it's no longer alive, and project index is resynced
b0a04gl
the ai agent runs inside the same remote runtime as the app. does it share the BEAM vm or run as a port process?
chrismccord
The agent runs outside your IDE instance and controls/communicates with it over Phoenix channels
ativzzz
This is very cool. I think the primary innovation here is twofold:
1. Remote agent - it's a containerized environment where the agent can run loose and do whatever - it doesn't need approval for user tasks because it's in an isolated environment (though it could still accidentally do destructive actions like edit git history). I think this alone is a separate service that needs to be productionized. When I run claude code in my terminal, automatically spin up the agent in an isolated environment (locally or remotely) and have it go wild. Easy to run things in parallel
2. Deep integration with fly. Everyone will be trying to embed AI deep into their product. Instead of having to talk to chatgpt and copy paste output, I should be able to directly interact with whatever product I'm using and interact with my data in the product using tools. In this case, it's deploying my web app
indigodaddy
look into Kasm workspaces.. great way to spin up remote docker-based linux desktops, and works great as an AI dev environment that you can use wherever you happen to be. There is homedir persistence, and package persistence can be achieved via some extra configuration that allows for Brew homedir-based package persistence.
I have recently been working with Google Jules and it has a similar approach. It spins up VMs and goes through tasks given.
It does not handle any infrastructure, so no hosting. It allows me to set multiple small tasks, come back and check, confirm and move forward to see a new branch on GitHub. I open a PR, do my checks (locally if I need to) and merge.
risyachka
>> Remote agent - it's a containerized environment where the agent can run loose and do whatever
How is this innovation?
kasey_junk
Many people have not experienced the async agent workflow yet, and to be fair, the major providers didn't have offerings for it until a month or two ago.
It's in fact one of my predictors for whether someone is going to be enthusiastic about agents or not.
And you wouldn’t think containerization would be a big leap but this stuff is so new and moving so fast that combining them with existing tech can surprise people.
throwaway314155
It's less innovative and more trendy. A lot of the fly integration can be achieved by simply asking claude code to look up the docs for the fly cli tool.
travisgriggs
I’m torn on “is this the future”.
I worked all day on a Phoenix app we’re developing for ag irrigation analysis. Of late, my “let’s see what $20/mo gets you” is Zed with its agentic offerings.
It actually writes very little Elixir code for me. Sometimes, I let it have a go, but mostly I end up rewriting that stuff. Elixir is fun, and using the programming model as intended is enlightening.
What I do direct it to write is a huge amount of the HEEX stuff, with an eventual pass over it to clean it up. I have not memorized all of the nuances of CSS and HTML. I do not want to. And writing it has got to be the worst syntactic experience in the history of programming. It’s like someone said Lisp was cool: rather than just gobs of nested parentheses, let’s double, nay triple, no quadruple down on that. We’ll bracket all our statements/elements with a PAIR of hard-to-type characters, and for funsies, we’ll spell out different words in there. And then when it came to ways of expressing lists of things, it’s like someone said “gimme a little bit of ini, case insensitivity, etc”. And every year, we’ll publish a new spec of new stuff that preserves the old while adding the new. I digress…
I view agentic coding as an indictment of how bad programming has gotten. I’m not saying there wouldn’t be value, but a huge amount of the appeal is that web tech is like legalese, filled with what are probably hidden bugs that are swallowed by browsers in a variety of unpredictable ways. What a surprise that we’ve given up and decided to let the tools do the probabilistically right thing. It’s not like we had a better chance of being precise and correct on our own anyway.
arrowsmith
Ah man I'm really happy to see this and excited to try it out.
As an Elixir enthusiast I've been worried that Elixir would fall behind because the LLMs don't write it as well as they write bigger languages like Python/JS. So I'm really glad to see such active effort to rectify this problem.
We're in safe hands.
acedTrex
LLMs not writing it well might be the biggest current selling point of elixir lol.
matt_s
Selling point from a developer and career perspective for sure. It's also fun to program in, and at least for me it made me think about solutions differently.
It's a negative point for engineering leaders that are the decision makers on tech stacks as it relates to staffing needs: LLMs not writing it well, developers that know it typically needing higher compensation, a DIY approach to libraries when there aren't any or they were abandoned and haven't kept pace with deprecations/changes, etc.
In the problem space of needing a web framework to build a SaaS, to an engineering leader there are a lot of other better choices on tech stack that work organizationally better (i.e. not comparing tech itself or benchmarks, comparing staffing, ecosystem, etc.) to solve web SaaS business problems.
I don't know where I stand personally since I'm not at the decision maker level, just thought I'd point out the non-programmer thought process I've heard.
zupa-hu
Oh lol I love that angle!
bad_haircut72
This is such a weird meme, Claude crushes elixir especially a fullstack app in liveview
heeton
Yea CC is great with phoenix / liveview. It’s been doing things that teach me new tricks about elixir I didn’t know yet.
ferfumarma
What's a good place to start for a completely naive but interested user?
heeton
Sorry, I only just saw this comment. Feel free to reach out (email in profile) and I’d be happy to chat.
bluehatbrit
These last few weeks I've been going hard on LLMs to put together a new prototype project. I've exclusively been using Claude Sonnet 3.7 within Zed (via GitHub Copilot) and it's fantastic.
From time to time it tries to do something a little old-school, but nothing significant really. It's very capable at spitting out entire new features, even in liveview.
Overall the experience has been very productive, and at least on par with my recent work on similar-sized Python and Next.js applications.
I think because I'm using mostly common and well understood packages it has a big leg up. I also made sure to initialise the phoenix project myself to start with so it didn't try to go off on some weird direction.
indigodaddy
Assuming you are on the $20 Zed plan? Has the 500 prompts/mo been sufficient for you? I'm debating between the Zed and Claude $20 plans-- no doubt I'd get better value from Zed's?
bluehatbrit
I'm on the free plan and have been using it via GitHub copilot instead, as the current project is a work one and they pay for that.
Before this I did a small project and I hit the 50 free tier limit through Zed by the time I was about 90% done. It was a small file drop app where internal users could create upload links, share them with people who could use them to upload a file. The internal user could then download that file. So it was very basic, but it churned out a reasonable UI and all the S3 compatible integration, etc.
I had to intervene a bit and obviously was reviewing everything and tweaking where needed. But I was surprised at how far I got on the 50 free prompts.
It's hard to know what you really get for that prompt limit though as I probably had a much higher number of actual prompts than they were registering. It's obviously using some token calculation under the hood and it's not clear what that is. All in all I probably had about 60-70 actual prompts I ran through it.
My gut says 500/mo would feel limited if I was going full "vibe" and having the LLM do basically everything for me every day. That said, this is the first LLM product I'm considering personally paying for. The integration with Zed is what wins for me over Claude, where you'd have to pay for API credits or use Claude Code. The way they highlight code changes and stuff really is nice.
Bit of a brain dump, sorry about that!
indigodaddy
Thanks, that gives me some new things to think about
jostylr
I have used Zed's plan with Claude and also Claude Code. They are very different experiences. Zed's agent work is very much set it, go away, review, give it some tips, iterate. As long as you use Sonnet and absolutely avoid the burn mode (formerly max mode), it should do a lot of work for you. The main limitation I hit is the context window. As the codebase gets larger, it takes more context for it to get going and then it tends to have a hard time finishing. I find that about 4 prompts works for a feature that would take me a few hours to code.
For Claude Code, the limit is reset every 5 hours, so if you hit it, you rest a bit. Not that big a deal to me. But the way it works I find much more stressful: you're reviewing just about everything it is doing, step by step. Some of it you can just say "yes, do it without asking permission", but it likes to run shell commands, and for obvious reasons arbitrary shell commands need your explicit yes for each run. This is probably a great flow if you want a lot of control over what it is doing. And the ability to intercede and redirect it is great. But if you want more of a "I just want to get the result and minimize my time and effort" approach, then Zed is probably better for that.
I am also experimenting with OpenAI's codex which is yet a different experience. There it runs on repos and pull requests. I have no idea what their rate/limit stuff will be. I have just started working with it.
Of the three, disregarding cost, I like Zed's experience the best. I also think they are the most transparent. Just make sure never to use the burn mode. That really burns through the credits very quickly for no real discernible reason. But I think it is also limited to either small codebases or prompts that limit what the agent is going through to get up to speed due to the context window being about 120k (it is not 200k as the view seems to suggest).
debian3
Try claude --dangerously-skip-permissions
jostylr
Thanks for the tip. That does work much more like Zed's integration. I used multipass to set up a VM, created a non-admin user, restricted its internet with tinyproxy, mounted the repo I am working on, and I don't worry about the danger. Just have to make sure the mounted directory is backed up. I do find that I hit the limits and have to wait for them to reset. That is either a good time to take a break or maybe supplement with Zed. Zed has the feature that one can pay for extra prompts. The context window with Claude Code seems less of an issue than in the Zed integration. It also has memory compaction if necessary, though I find most of my feature work finishes before hitting that limit.
indigodaddy
Helpful feedback, thank you!
mrcwinn
Worried it might fall behind… further? I love LiveView, Phoenix, Elixir, OTP. But the ecosystem is a wasteland of abandoned packages.
If Phoenix.new helps solve that problem, I’m all for the effort. But otherwise, the sole focus of the community leaders of Elixir should be squarely and exactly focused on creating the incentives and dynamics to grow the base.
Compare, for example, Mastra in TypeScript or PydanticAI in Python. Elixir? Nothing.
Not here to bash. It’s more just a disappointment because otherwise I think nothing comes close.
uncircle
All languages are a wasteland of abandoned packages, i.e. there is a very long tail of stuff no one has maintained for years. It’s all relative to the mindshare. For its size, Elixir is doing quite well.
mrcwinn
It's not the long tail. It's that the HEAD of packages in Elixir are also often poorly maintained or not maintained. The fundamental question for any developer: can I be productive quickly? Despite all that Elixir has going for it, the answer is often "no."
Want a first-party client library for the service you're using? Typically the answer is "too bad, Elixir developer." And writing your own Finch or Req wrapper for their REST endpoint simply isn't a valid answer.
>For its size, Elixir is doing quite well.
I'm actually arguing the opposite. Elixir is not doing well because of its size. So how can that be influenced and changed?
prophesi
What packages in Elixir have you found unmaintained/missing in the ecosystem? Genuinely curious.
Some languages—Clojure is a good example—have packages from 10 years ago, entirely unmaintained, that still work great because no maintenance is needed.
arrowsmith
This is also true for Elixir though. A lot of "unmaintained" Elixir packages still work fine.
sodapopcan
That's their point (I think, lol).
spiderice
In my experience, Elixir is very much on that end of the spectrum as well. I'm wondering if GGP just considers packages that don't have updates for 6 months as "unmaintained" or "dead" because they come from Javascript world where everything is, well... you know.
This is such a weird thing to say and I see it all of the time. It sucks people have been tricked into thinking a library must be updated every 2 weeks in order to still be relevant.
You think just because an author bumps the version number of a library it's somehow better than a library that is considered complete?
It boggles my mind that people actually think this way.
sodapopcan
You must never have been a Ruby developer. “I notice this library hasn’t been updated in 8 days, is it still being maintained?”
conradfr
Old packages usually still run great in Elixir though.
throwawaymaths
in principle llms should do better on immutable languages since there is no risk a term will get modified by a distant function call.
bevr1337
In my experience, it's the functional part, not immutability, where they fall short. Any LLM can write immutable C# because it's easy and there's incredible amounts of training data.
throwawaymaths
good news, "immutable" is pretty much the only way that elixir is "functional" except for lambdas being first class datatypes (which is almost every language now)
victorbjorklund
I found them to be pretty solid for writing Elixir (not perfect but neither is it with JS) the last couple of months.
nxobject
Yeah – as someone who works in Common Lisp, I wish there was a way to do complementary training for LLMs with existing corpora of codebases. Being able to read/access documentation doesn't do much, unfortunately, to help with more general issues with correctness of output.
neya
I use o3, it's really good with Elixir. I prefer it to Claude, but Claude does a decent job as well.
If you want to take your website and business down, use ChatGPT-4o's code
pawelduda
Claude 3.5 produces very good Elixir/Phoenix code. Haven't tried 3.7 much but I assume it's only going to get better from here
te_chris
Claude and o3 are both excellent (if a bit erratic) elixir developers.
zorrolisto
Same, I watched a video from Theo where he says Next.js and Python will be the best languages because LLMs know them well, but if the model can infer, it shouldn’t be a problem.
rramon
Folks on YouTube have used Claude Code and the new Tidewave.ai MCP (for Elixir and Rails) to vibe code a live polling app in Cursor without writing a line of code. The 2hr session is on YT.
since models can't reason, as you just pointed out, and need examples to do anything, and the LLM companies are abusing everyone's websites with crawlers, why aren't we generating plausible looking but non working code for the crawlers to gobble, in order to poison them?
I mean seriously, fuck everything about how the data is gathered for these things, and everything that your comment implies about them.
The models cannot infer.
The upside of my salty attitude is that hordes of vibe coders are actively doing what I just suggested -- unknowingly.
fragmede
But the models can run tools, so wouldn't they just run the code, not get the expected output, and then exclude the bad code from their training data?
bee_rider
That seems like a feedback loop that’s unlikely to exist currently. I guess if intentionally plausible but bad data became a really serious problem, the loop could be created… maybe? Although it would be necessary to attribute a bit of code output back to the training data that lead to it.
Imustaskforhelp
For what it's worth, AI already has subpar data. At least this is what I've heard.
I am not sure, but the cat is out of the bag. I don't think we can do anything at this point.
itsautomatisch
Burnt through some credits pretty quickly. The first few minutes of using it felt like a glimpse of what it was supposed to be like, but otherwise it spent a lot of time not getting basic things right with the UI, and before I knew it I was done. At roughly 90 minutes for $20, it's not good value when you can ostensibly have the same experience on your own computer and have control over every aspect. I still can't clone the latest revised version of the codebase it created to my local machine. Between that, Fly's non-existent documentation, the lack of any usage meter, and the lack of unpaid support (even though I am paying to use the service?), I want to avoid Fly, which is unfortunate because I think it does a lot of things right, especially the tunneling and dev experience outside of phoenix.new.
krainboltgreene
I've wasted a lot of time and energy on stuff that doesn't matter, so I can hardly judge anyone else on what they focus on, but man does it feel bad to have community leaders actively focus on building out tooling that is anti-worker. I think the only way I'd feel more conflicted is if Fly.io started building weapons systems for the military. I guess that wouldn't be shocking considering some of their lead's beliefs.
mrkurt
It's safe to say that if either Chris or I believed this to be anti worker, we wouldn't be working on it. He's spent the last 10+ years working on Phoenix specifically to improve the lives of the people doing the work.
My experience with software development is maybe different than yours. There's a massive amount of not-yet-built software that can improve peoples' lives, even in teeny tiny ways. Like 99.999% of what should exist, doesn't.
Building things faster with LLMs makes me more capable. It (so far) has not taken work away from the people I work with. It has made them more capable. We can all build better tools, and faster than we did 12 months ago.
Automation is disruptive to peoples' lives. I get that. It decreases the value of some hard earned skills. Developer automation, in my life at least, has also increased the value of other peoples' skills. I don't believe it's anti worker to build more tools for builders.
krainboltgreene
> There's a massive amount of not-yet-built software that can improve peoples' lives, even in teeny tiny ways. Like 99.999% of what should exist, doesn't.
We agree on this completely, however you and I know there are plenty of people without jobs in the world who could be employed to do this work. You are spending your finite amount of time on earth working with services that are trying to squeeze the job market (they've said this openly) rather than spending it increasing the welfare of workers by giving them work.
> Automation is disruptive to peoples' lives.
You know the difference between automation and the goals of these companies. You know that they don't want to make looms that increase the productivity of workers, they want to replace the worker so they never have to pay wages again.
tptacek
> rather than spending it increasing the welfare of workers by giving them work.
Saying the quiet part loud here.
jonator
Exactly. And as more software is written, the demand and possible set of softwares increase.
It's really a matter of positive sum/growth mindset vs scarcity/status quo mindset.
hooverd
Maybe, but it's zero sum to the people with money. We saw wage growth in the lowest end during the Biden admin and it drove the bosses insane.
yunwal
The same logic that would lead one to believe that AI is anti-worker should also lead one to believe that software as a whole is anti-worker.
krainboltgreene
Sure, if you don't think about it at all.
mwcampbell
The argument you're responding to is effective enough, based solely on the fact that it has led me to second-guess whether I chose the right line of work, that it would be worth expounding on what you think is wrong with it.
revenant718
I am inclined to agree with you. Card-carrying socialist and all that. But I wonder if you could share a good-faith rebuttal of this point.
It's more than evident that software has automated away all kinds of wage labor from the aforementioned typist pools to Hollywood special effects model-makers.
What's different now is that it is actually the software creators’ labor that is in danger of automation (I think this is easily overstated but it is obviously true to some degree).
I get that it feels different for us now that OUR ox is the one being gored. And I do think there will be no end of negative externalities from the turn towards AI. But none of that refutes the above respondent's point?
krainboltgreene
A few things:
1. Typists are still around and so are special effects model-makers.
2. People who program aren't in danger of automation.
3. These services are entirely unsustainable, they will absolutely not last at their current pace.
The premise of this entire work, detailed by the creator, is to utilize a program to reduce the amount of work a programmer is required to do. They believe ultimately, like most results of improved automation, that this will result in more things we can work on because we have more time. I agree that this would likely be the case! We could also simply make more programmers, could we not? Why haven't we? Do the 18k people homeless in my city tonight not deserve a shot at learning a skill before we even think about making the work easier per person?
Finally, and more to the point, genAI is built by and designed to eliminate workers entirely. The money that goes into those services funds billionaires who seek to completely and totally annihilate the concept of the proletariat. When I make a tool that helps workers at my job do their job better I am not looking to eliminate that person from the company.
jonator
You can think of it as just automating the boring tedious stuff so us humans can focus on the harder problems like strategy, direction, design, GTM, etc.
The days are numbered where humans are sitting typing out code themselves.
It's akin to the numbered days of typewriter secretaries of the 20th century.
krainboltgreene
I know what stories workers in the industry are using to cope with working with capitalists that have explicit goals of eliminating workers.
I'm sure your poor understanding of the history of improved tooling, like "type writer secretaries", will be a soft comfort in the future.
jonator
And you don't think capitalism is the reason we have these computer jobs to automate away in the first place?
krainboltgreene
No, because I’ve read a history book.
wturner
Most tech isn't "Anti worker". What determines pro/anti worker are laws and government policies that reciprocate with the cultural norms we adopt. At the moment, money in U.S. politics is the most anti-worker phenomena I can think of. The ultra wealthy have a monopoly on the incentives that create policy and how our lives are ordered. The only power working people seem to have is the ability to impose consequences via rogue guerilla acts of protest and violence (Luigi Mangion) . Hopefully, AI is a Frankenstein monster the public learns to wield to facilitate more of these "consequences" and upend the monopoly the super wealthy have on policy incentives and change the way politics is funded for good. It's a new world and a Hawaiian or New Zealand doomsday bunker isn't going make a difference.
leafmeal
Do you believe making things easier and more accessible is bad for workers? I don't think it inherently is or isn't, it just depends on who benefits from the increased efficiency. I think that's more of a problem with your economic system, or wealth distribution.
Overall I think we would all be happier if efficient machines take away the drudgery of our daily work and allow us to focus on things that really matter to us. . . as long as our basic needs are met.
krainboltgreene
> Do you believe making things easier and more accessible is bad for workers?
Nope, I've been doing it for 16 years.
losvedir
This is very neat, and right up my alley as both someone really into Elixir and who thinks agentic AI is the future.
I have a question about how you manage context, and what model you use. Gemini seems the best at working with large context windows right now, but even that has its limitations. Thinking about working with Claude Code, a fair bit of my strategizing is in breaking down work and managing project state to keep context size manageable.
I'm watching the linked video and it's amazing seeing it in action, but I'm imagining continuing to work on a project and wondering if it will start losing its way so to speak. Can you have it summarize stuff, and can you start a session clean with those summaries, and have it "forget" files it won't need to use for this next feature, etc?
rustc
"Sign in with fly.io" takes me to a page asking me to pay $20 but the plan details are vague - what exactly is included in "$20/mo of Built-In AI Assistance
Builds, refactors, and debugs right in your IDE"?
tptacek
This is a situation where we've been pushing on Chris to get this out into the world quickly, and there's a lot of packaging stuff like this that isn't fully put together yet. Thanks for calling it out! We'll get to it over the next week.
gavmor
Phoenix.new looks powerful, and I'll definitely play with it.
I've been daydreaming of an agentic framework that maximally exploits BEAM. This isn't that, but maybe jido[0] is what I'm looking for.
I'm a little surprised by the sentiment here that LLMs don't do well with Elixir. I've had a pretty good experience using AI tools on Phoenix/Elixir side projects.
arrowsmith
LLMs are definitely a lot better at Elixir than they used to be - the gap has closed somewhat. I still perceive a gap though, especially when trying to do more complicated things in Phoenix and LiveView (as opposed to just raw Elixir.)
Which LLMs do you use that you find are best with Elixir/Phoenix?
CollinEMac
I haven't really used much other than Claude.
I guess I should also note that I haven't really used LiveView much.
vaer-k
I've only used LLMs with Elixir, so I don't have any other experience for comparison, but I've found that although Claude frequently employs the wrong approaches in Elixir, I usually know when he'll have trouble and just ask him to read pertinent documentation first. So long as he's read the manual he seems to do just fine.
jsmo
Hmm, just signed up to check it out but no trial just "$20/mo of Built-In AI Assistance" without any mention of usage limits?
afro88
Same. Agents can be very expensive. Plus I have no idea how reliable or effective this actually is in practice. Would love to try it first.
zombiwoof
It's wild to me: all this progress with AI, but at the same time, on my brand new Mac, going through the README to try out Phoenix, it can't even get past the first step without an error (it says it can't find Postgres), yet the docs say I don't need Postgres and it will default to SQLite if Postgres and MySQL can't be found.
Hard to put confidence in AI vibe hacks when the basic stuff just doesn't work.
* (Mix) The database for Myapp.Repo couldn't be created: killed
Sytten
I am probably doing something wrong, but I hate agents for coding. I like the autocomplete and the prompt to generate snippets, but when it starts modifying code in many files so fast, all at once, tries a bunch of stuff, and never knows when to stop, it pisses me off more than anything else. Because most of the time, if it had stopped and let me do the last 10%, it would have been actual legit code.
artirdx
Mindblowing! This is 100x VB6. The generated UIs are beautiful and professional. I am still trying it out, building an app for tracking expenses, and it is working very well. The conversational dialogue it has with the developer is just fantastic. I am amazed at how clear the user experience was in Chris' mind. I am not sure what LLM is being used, but this is a better experience than any LLM I've tried. Given this is a first version, I look forward to what comes next!
Few issues:
1. The 150 message limit is understandable, but it suddenly pops up and you lose significant work. I was working on a UI mockup, and just as I had finished and was ready to move on to implementation, this limit appeared and a significant part of my work was lost.
2. After the first credit, the credit seems to exhaust pretty fast which makes it expensive, especially when you are trying it out.
3. Also, I don't understand why, when you ask it to prototype different screens, it overwrites the same file.
4. It is not able to stop to seek user feedback but keeps trying different approaches, which kind of exhausts the credit. It would be nice if it described its approach, so the human developer could provide feedback.
5. It seems it is using OpenAI because it is often self-congratulatory to the point of being annoying sometimes.
psadri
I love the idea of Phoenix and server side rendering (I happen to work on SkyMass, a related project).
This is a tangential comment and should not detract from what Chris and team have created. I think closing the loop between agent and the running output is a great/critical step forward.
However, I find using AI to build traditional apps with a UI is a bit like improving the way automobile steering wheels are made, in a world that soon won't need steering wheels at all.
If the AI is so good to write the code for an App, how much longer before you won't need those Apps in the first place? And then the question is, what will fill the role that Apps play today.
liampulles
I think it boils down to: if the AI screws up what I asked it to do, who do I have to hold accountable? If the answer is that there is no one I can hold accountable, because the AI agent I used removes any and all onus of responsibility in its terms of service, then I'm not going to use it for anything non-trivial.
colecut
I don't think we want to move to a world where everything we use is AI driven all the time.
A coded app is significantly more efficient to execute, and more predictable, than dealing with AI in most situations.
psadri
I don't disagree that code is far more efficient than inference. But what are those apps? A lot of apps are fetching some data, massaging it, rendering UI to let the user view / update data. Could the AI do some of that "work" for you, so that you don't need those dashboards / buttons / forms in the first place? Maybe you have an agent that does a db query (instead of a human viewing a dashboard) and takes some action (instead of the human clicking a button).
themgt
This looks really cool, but I gotta say I'm a bit uneasy with the apparent(?) closed-source + hosted + branding. "mix phx.new" is the way to generate a new Phoenix project, but "Phoenix.new" is closed source Fly.io product for building Phoenix projects?
Feels like we're getting into a weird situation if LLM providers are publishing open source agentic coding tools and OSS web app frameworks are publishing closed source/non-BYOK agentic coding tool. I realize this may not be an official "Phoenix" project but it seems analogous to DHH releasing a closed-source/hosted "Rails.new" service.
liampulles
This is very cool! I will say though, my spidey senses kick in when looking at stuff like this, and make me wonder about how much I'm going to get vendor-locked here. Could I use it to develop a site off fly.io? If the answer is no, then I'd say this is cool and useful for people who need something quick and simple out of the box, but not something I would ever use on a serious production project.
loloquwowndueo
The answer is yes. What would you say then?
There’s a “clone Git repo” thing in the left sidebar; use that to clone the project locally, mix deps.get, mix phx.server, and you’re up. You can deploy this anywhere you want.
liampulles
If the DB, Auth libraries and such are not tied to fly.io, then yep that's cool. Happy
andy_ppp
It’s just producing Elixir/Phoenix code that is stored in GitHub and can be deployed anywhere.
moffers
The mental models of Elixir/OTP and AI Agents are very compatible. I’ve felt for a long time that it would be one of the best platforms for building AI agents.
zupa-hu
Would you elaborate why?
indigodaddy
So is Phoenix.new a Fly.io product, or just under the fly umbrella? Also, is pricing clearly laid out anywhere (including what the additional costs are for permanently deployed/hosted services that arise from Phoenix) ? Didn't dig too hard admittedly, but wasn't obvious where to find or look for pricing information on the front page on mobile
unvs
Any chance of open sourcing the model instructions for this? Do you feed it all the Phoenix/LiveView/Elixir docs, or have you written more specialised instructions?
I find Claude to have quite a bit of problems trying to navigate changesets + forms + streams in my codebase, just wondered if you had any tips of making it understand better :)
aytigra
A bit sad that a language and framework so enjoyable to write and read will be mostly hidden in a coding box.
And thinking about it made me realize that soon there will be a completely different programming language used solely by coding agents.
ChatGPT gives an interesting take on this, "The fundamental shift is that such a language wouldn’t be written or read, but reasoned about and generated. It would be more like an interlingua between symbolic goals and executable semantics, verbose, unambiguous, self-modifying, auto-verifiable, evolving alongside the agents that use it").
troad
This makes me very uneasy. Not in what it is per se, but in what it shows about the direction of Phoenix.
I've been working with Phoenix a lot the last few months, and I like it a lot. But I do get the sense that the project suffers from wanting to perpetually chase the next new thing, even when that comes at the expense of the functional elegance and conceptual cohesiveness that I think is Phoenix' main strength.
LiveView is a great example. It's a very neat bit of tech, but it's shoe-horned surprisingly awkwardly into Phoenix. There's now a live view and non-live view way to do almost everything in Phoenix, and each has their own different foibles and edge cases. A lot of code needs to work with both (e.g. auth needs to happen at both levels, basically), meaning a surprising amount of code needs to have two, nearly identical variants: one with traditional Plug idioms, and then another using LiveView equivalents. Quick little view helpers end up with either convoluted 'what mode am I in?' branching, or (more likely) in view-mode-dependent wrappers around view-mode-independent abstractions. This touches even the simplest helpers (what is the current path?) and becomes more cumbersome from there. (And given the lack of static analysis for views, it can be non-trivial to even find out what is and isn't actually working where.)
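To make that concrete, the duplication looks roughly like this (names are in the spirit of what mix phx.gen.auth generates; a sketch, not exact generator output):

    # router.ex -- the same auth rule expressed twice: once as a Plug pipeline
    # for regular controller requests, once as an on_mount hook for LiveViews.
    scope "/", MyAppWeb do
      pipe_through [:browser, :require_authenticated_user]

      live_session :require_authenticated_user,
        on_mount: [{MyAppWeb.UserAuth, :ensure_authenticated}] do
        live "/settings", SettingsLive
      end

      get "/invoices", InvoiceController, :index
    end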
Not every website should be a live view (hiking directions, for example), but that is clearly the direction of travel in Phoenix. Non-live views get the disparaging moniker 'dead views', and the old Phoenix.HTML helpers have been deprecated in favour of <.form />-style live components. The generators depend on those, plus Tailwind, Hero Icons and (soon) DaisyUI, all fetched live from various places on the Internet on build. This tight coupling to trendy dependencies will age poorly, and it makes for bumpy on-boarding (opinionated and tightly coupled isn't necessarily a smoother experience, just a more inflexible one).
So with all of that in mind, while I'm not shocked to see Phoenix jump on the vibe coding hype train, I guess I am disappointed.
The revelation that AI is now writing PRs for Phoenix itself is not confidence inspiring. I rely on frameworks like Phoenix because I don't want to have to think about the core abstractions around getting a website to my users; I want to focus on my business logic. But implicit in that choice is the assumption that someone is thinking about those things. If it's AI pushing out Phoenix updates now, my trust level and willingness to rely on this code drops dramatically. I also do not expect Phoenix' fraying conceptual cohesiveness to get any better if that's the way we're headed.
Phoenix is still an amazing piece of tech, but I wish I felt more at ease about its future trajectory.
ferfumarma
I love everything I read about Phoenix. It's definitely the framework I'm going to use the next time I need to scratch an itch.
Having a full stack that is easy to use as a learning sandbox is incredibly helpful in that regard, so this looks amazing.
krts-
This is incredible. It does seem quite expensive compared to Zed or Claude Code now it's on Pro. But neat enough I've burned through the $20 subscription credit despite being a bit of an AI sceptic. This seems to have a much better handle on UI design (unless I'm missing something with the other agents), but as a solo dev I'm becoming quite convinced. It's also got me to try out fly again.
I couldn't get Tidewave working but I must try again to see if Tidewave with Claude Code would offer this level of awesome.
ps. @fly - please let me buy more credit, I just get an error!
chrismccord
Thanks for the feedback! Send your fly email to chris@fly.io and I'll get things sorted out. We'll throw you some credits for the trouble :)
krts-
Hero.
toolhouseAI
Hey @chrismccord, very confused but this is a collab between you and the FLY.IO project right? Like I can't eject the app from this and run it myself? This isn't an open source Phoenix project?
olafura
You can just use git to clone the code
ipnon
Chris is a hacker’s hacker.
nixpulvis
A hacker's hacker who charges for closed source AI tools which work in hosted environments only. Yea, right.
abrookewood
Come on, devs need to eat. Companies need cash to employ people. After all that he has done for open source, you're complaining that something isn't free? You know you can export the code at any time right?
nixpulvis
I'm not saying he's a bad dev, just hardly fits the hacker ethos.
We want things we can tinker and toy with from the inside.
qudat
Technically this is really impressive. Practically I don’t understand the point. Who is the target demographic? People that don’t have a local dev environment?
abrookewood
It's not really about the remote IDE. It's about an integrated environment where the agent can do everything it needs (install OS packages, inspect the code, view the app via a browser etc) in order to build what you want. It could end up being the place you 'start' an app, before exporting it and fleshing out the features locally.
qudat
So the primary benefit over any other AI agent IDEs is it has access to a browser?
Ya I really am not the target demographic for this since I don't use AI agents in my IDE anyway.
It does seem to perfectly fit fly.io in the sense that I also don't care about "edge" apps.
nilirl
Beautiful demo! How do I build agents like this?
Does anyone know any great resources to learn how to design agents?
Tool agnostic resources would be awesome.
chrismccord
Thanks! Everything is overly complicated in this space. It's probably far easier than you think. The open secret is it's just a loop that POSTs to [provider]/chat/completions.
Elixir is particularly well suited here. In Elixir this is a genserver doing http posts and reacting to the token stream. The LiveView chat gets messages from the genserver agent regardless of where it is on the planet, and the agent also communicates with the phoenix channel websocket talking to the IDE machines with regular messages, again anywhere they are on the planet.
I talk about this quite a bit in my ElixirConfEU talk and distill things down: https://youtu.be/ojL_VHc4gLk?si=MzQmz-vofWxWDrmo&t=1040
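(For the curious, a minimal sketch of that loop in Elixir. It is illustrative only: Req as the HTTP client, the OpenAI-compatible /chat/completions endpoint, and the model name are assumptions here, not what phoenix.new actually runs.)

    # Minimal agent-loop sketch -- NOT phoenix.new internals.
    # Assumes the Req HTTP client and OPENAI_API_KEY in the environment.
    defmodule AgentLoop do
      use GenServer

      def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

      @impl true
      def init(opts) do
        {:ok,
         %{
           subscriber: Keyword.fetch!(opts, :subscriber),
           messages: [%{role: "system", content: "You are a coding agent."}]
         }}
      end

      # Each user turn appends a message, POSTs the whole conversation,
      # and forwards the assistant reply to whoever is listening (e.g. a LiveView).
      @impl true
      def handle_cast({:user, text}, state) do
        messages = state.messages ++ [%{role: "user", content: text}]

        %{body: body} =
          Req.post!("https://api.openai.com/v1/chat/completions",
            auth: {:bearer, System.fetch_env!("OPENAI_API_KEY")},
            json: %{model: "gpt-4.1-mini", messages: messages}
          )

        reply = get_in(body, ["choices", Access.at(0), "message", "content"])
        send(state.subscriber, {:agent_reply, reply})
        {:noreply, %{state | messages: messages ++ [%{role: "assistant", content: reply}]}}
      end
    end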
Wow, thank you for the link, it clarified how tool calling and "choice-making" works.
It's like helping LLMs use a computer; like building an interface for it.
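(As a rough illustration of that "interface" idea, this is what an OpenAI-style tool definition plus dispatch can look like in Elixir. The run_shell tool, its schema, and the Jason dependency are assumptions for the sketch, not anything from phoenix.new.)

    # Illustrative tool definition + dispatch. The tool list is sent along with
    # the chat messages; when the model replies with a tool call, we run it and
    # feed the output back as a "tool" message on the next loop iteration.
    defmodule ToolSketch do
      # Advertised to the model alongside the conversation.
      def tools do
        [
          %{
            type: "function",
            function: %{
              name: "run_shell",
              description: "Run a shell command in the project workspace",
              parameters: %{
                type: "object",
                properties: %{command: %{type: "string"}},
                required: ["command"]
              }
            }
          }
        ]
      end

      # Decode the model's arguments (JSON via Jason), run the command,
      # and return a message the loop can append to the conversation.
      def handle_tool_call(%{"function" => %{"name" => "run_shell", "arguments" => args}}) do
        %{"command" => cmd} = Jason.decode!(args)
        {output, _exit_status} = System.cmd("sh", ["-c", cmd], stderr_to_stdout: true)
        %{role: "tool", content: output}
      end
    end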
Ok, this is enough to get me started.
mcdirty
This is great. I had to back out of a phoenix project and rewrite it in Django because I couldn't get good AI assistance. I'm pretty inexperienced with Elixir and Phoenix but understand the benefits enough to want to make projects in it. So this is really cool.
dontlaugh
I find that baffling. Why not just write the Elixir yourself?
prophesi
Not just baffling, but concerning. LLMs are great for learning new languages, but terrible for outputting code you don't understand yet hope to maintain.
abrookewood
Yet, more and more people are going to be doing exactly this.
causal
I had this experience too. Though most of my issues were not necessarily with Elixir itself so much as with my model's grasp of the Phoenix model, state, and CLI. Maybe even differences in versions? It wasn't always clear.
martypitt
This is really cool! And, that you did it in a few weeks is insane.
How did you get VS Code embedded in your app? I'm aware of projects like Monaco, and that vscode.dev exists - so it's clearly possible - but I didn't realize it was something others could build upon?
Again, kudos!
__0x01
What LLM does Phoenix.new use?
chrismccord
claude 4 sonnet as the main driver atm, and a mix of smaller models depending on the scenario
tptacek
I'm building an agent right now (or rather, extending an agent I built a few weeks ago in a couple hours) and I would love to hear more about which scenarios get assigned to which models.
(I almost just asked you on company Slack but figured the answer would be more broadly interesting.)
ethan_smith
Based on the blog post, Phoenix.new uses Claude 3 Opus as its underlying model, which explains its strong performance with Elixir/Phoenix codebases.
sergiotapia
What model is it using for all the agentic work?
What usage limits do we get with the $20 monthly price?
Thank you!
kuon
I really wish I could move to those nice new editors, but as a vim user I just feel paralyzed when I cannot use vim bindings. And all "emulations" I tried are just incomplete.
timeinput
I'm with you. I'm going to try Zed in the next couple days based on the response to my comment here. https://www.hackerneue.com/item?id=44322560 . I'm stuck with vim; my fingers basically only work in vim. But I managed to move to neovim, which worked. Maybe (something else) can work too.
chrismccord
as an avid vim user who moved to emacs evil-mode for a better vim than vim, and now who uses vscode with vscode-vim, it pains me to admit a web browser based editor is a better vim than vim. Somehow starts faster and is less kludgey and more scriptable. You can install extensions on phoenix.new, so vim is not a blocker for you. I drive vim emulation in it every day both on desktop app and within phoenix.new. Couldn't use it without it :)
kuon
My main issue with vim emulation is that you cannot do file management and window management.
In my vim workflow I keep splitting/unsplitting windows and I like to have a file browser I can navigate with vim bindings.
loloquwowndueo
You can pull the repo, tweak it locally with Vim, push it back and ask the LLM to work on top of that. No need to use the built in IDE if you don’t like it.
poisonta
Phoenix needs an ActiveRecord-like database abstraction layer. Many Rails developers try Phoenix at some point because they may need better performance. They’re so accustomed to the Rails structure that they assume Rails has done everything right. However, Ecto and ActiveRecord are two very different beasts. When Rails developers try out Ecto, they often feel there’s too much boilerplate and believe the Rails design is much more intuitive. This, I think, is one reason Phoenix struggles to attract Rails developers. If it can’t please Rails users, it will rarely appeal to others.
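(For readers who haven't seen Ecto, this is roughly the explicit schema-plus-changeset code it expects, where ActiveRecord would infer most of it from the database. MyApp.Accounts.User is just an illustrative module name.)

    defmodule MyApp.Accounts.User do
      use Ecto.Schema
      import Ecto.Changeset

      # ActiveRecord reads columns from the DB; Ecto asks you to declare them.
      schema "users" do
        field :email, :string
        field :name, :string
        timestamps()
      end

      # Casting and validation live in an explicit changeset function
      # rather than model-level validations.
      def changeset(user, attrs) do
        user
        |> cast(attrs, [:email, :name])
        |> validate_required([:email])
        |> unique_constraint(:email)
      end
    end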
lawn
The Ash framework is a data abstraction layer you might want to check out, although I'm not familiar enough with Rails/ActiveRecord to tell if it's closer to what they're after.
tobyhinloopen
My primary concern with Phoenix is how bad it performs in LLMs and I basically went back to Node/React/Rails for LLMs.
This is very exciting and I’ll check it out!
josefrichter
What do you mean by performs bad in LLMs?
tobyhinloopen
LLMs don’t know Elixir/Phoenix very well
josefrichter
Claude Code is pretty proficient in it. Others, not so much. True.
tobyhinloopen
Did you consider creating a LLM.txt and some variants for Phoenix?
Some libraries have text-based documentation for LLMs which works great in my experience.
johnwheeler
Why wouldn't LLMs work as well with Elixir as python or JS? LLMs don't parse an AST.
lawn
Because the amount of Python and JS code in the wild is much greater than the amount of Elixir code, so the LLMs have much more data to base their answers on.
johnwheeler
Thanks
rramon
Very cool. Does it use Phoenix 7 or 8 in the video?
chrismccord
1.8rc, which is going 1.8.0 momentarily
arrowsmith
1.7 or 1.8, not 7 or 8
rramon
Thanks for correcting my typo.
sockboy
Integrating AI assistance directly into an Elixir IDE could boost productivity, especially for newcomers. Excited to see how remote SSH and local workflows develop!
artur_makly
For us SaaS founders, growth hackers, product folks, and other non-devs: what are some solid use cases for it today?
mcdirty
Is the site down?
tough
this is the excuse i needed to revisit phoenix-elixir after a decade
ty fly team
asmodeuslucifer
Well of course I have to test it.
Disabled on news.ycombinator.com
citizenpaul
Heyy! Now we know why fly.io was shilling for AI a couple weeks ago.
"They are not making money off AI".was the most common response to my pointing out they were shilling AI. Feels good to be right.
lawn
Maybe you mean the "My AI Skeptic Friends Are All Nuts" blog post on fly.io?
Where they have this nugget about plagiarism:
> But if you’re a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass. No profession has demonstrated more contempt for intellectual property.
So I guess if you have concerns about them using any code you upload using this tool you can shove it up your ass.
pier25
I'm surprised they are investing into this. I checked Phoenix recently because I was interested in LiveView and there isn't even an official AWS SDK for Elixir.
Honestly doubt the AI stuff is going to move the needle much if you can't even have a dependable S3 client.
chrismccord
Meanwhile I am a happy user of :ex_aws or :req_s3, which have done everything I need them to do. Object ops, IAM policies, etc. A dependable S3 client has been there for years. The Elixir core team doesn't need to maintain it.
ReqS3 is one of my favorite things to use: https://hexdocs.pm/req_s3/readme.html
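(Rough usage of both, for reference; this assumes credentials are already configured, and exact options may differ, so check each package's docs.)

    # :ex_aws + :ex_aws_s3
    ExAws.S3.list_objects("my-bucket") |> ExAws.request!()

    # :req_s3 -- attach the plugin and use s3:// URLs
    req = Req.new() |> ReqS3.attach()
    Req.get!(req, url: "s3://my-bucket/some/key").body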
pier25
Maybe you can guarantee Phoenix will be maintained 5-10 years from now but that's really not the case for some random library on Github.
If you look around you'll see this kind of stuff is really one of the biggest blockers for Elixir and Phoenix. Especially for something as fundamental as cloud storage.
techpression
Considering what a horror the official AWS CLI is, this seems like a strange example. I’ve used both of the non-official libraries and they work fine. The one that is auto-generated doesn’t feel very Elixir, but that’s to be expected.
pier25
> I’ve used both the non official libraries and they work fine
Maybe fine today but what about 5 years from now?
Can you say, with any degree of confidence, whether these libraries are going to be properly maintained in the future? No, you cannot.
techpression
Goes for every library out there though. Official or not.
pier25
Not really. You can be much more confident the official AWS SDK will be available 5 or even 10 years from now.
olafura
What are you talking about? There has been an AWS client forever and I've never had a problem. It's not something you really need an official SDK for; they are often just reference implementations anyway, because you might want different performance characteristics.
I've usually not seen more than 3 or so official SDKs for most services, and there are a lot more programming languages than that. For example Microsoft's Graph API doesn't have an official Ruby client; they have one that sort of works.
Snakes3727
Neither of them is official, which is often a non-starter for some large enterprise customers.
pier25
Not only large enterprise customers. Anyone who's thinking mid or long term.
quaunaut
The main lib everyone uses, :ex_aws, has been actively maintained for literally over a decade[1]. Official or not, it's used by literally the entire community, since even non-AWS services often will support its API.
https://hex.pm/packages/ex_aws https://hex.pm/packages/ex_aws_s3
1. https://github.com/ex-aws/ex_aws/releases?page=2
I still don't understand this. If you are big enough then you get Amazon to make an official sdk, if you aren't then what exactly are you looking for?
The official aws cli used to talk to the SOAP interface and used regex instead of actually doing correct error handling, and that was used by so many tools, even though it used to break horribly.
It's quite a niche you are talking about: not big enough to debug open source code, but still big enough to require an SLA for the SDK, and not being able to talk Amazon into creating it. It's generated code, it's not rocket science.
What I have experienced is that the software licence, where you are sending data, where you are hosting it, and having access to audit the code have usually been bigger concerns.
But then again big organisations often have really specific concerns. So I'm not doubting your statement it's just that I have never heard it before.
I saw this and thought, if this doesn't get me to give it a go, nothing will.
Less than 45 minutes after signing up for fly.io, I have a multi-room tic tac toe game deployed.
https://tic-tac-toe-cyber.fly.dev/
Just a clarifying question since I'm confused by the branding use of "Phoenix.new" (since I associate "Phoenix" as a web framework for Elixir apps but this seems to be a lot more than that).
- Is "Phoenix.new" an IDE?
- Is "Phoenix.new" ... AI to help you create an app using the Phoenix web framework for Elixir?
- Does "Phoenix.new" require the app to be hosted/deployed on Fly.io? If that's the case, maybe a naming like "phoenix.flyio.new" would be better and extensible for any type of service Fly.io helps in deployment - Phoenix/Elixir being one)
- Is it all 3 above?
And how does this compare to Tidewave.ai (created, as you presumably know, by the Elixir creator)?
Apologies if I'm possibly conflating topics here.
You could absolutely treat phoenix.new as your full dev IDE environment, but I think about it less as an IDE and more as a remote runtime where agents get work done that you pop into as needed. Or another way to think about it: the agent doesn't care about or need the vscode IDE or xterm. They are purely conveniences for us meaty humans.
For me, something like this is the future of programming. Agents fiddling away and we pop in to see what's going on or work on things they aren't well suited for.
Tidewave is focused on improving your local dev experience while we sit on the infra/remote agent/codex/devin/jules side of the fence. Tidewave also has a MCP server which Phoenix.new could integrate with that runs inside your app itself.
Honestly, this is depressing. Pop in from what? Our factory jobs?
Oh, you sweet summer child. ;)
You will pop in from the other 9 projects you are currently popping in on, of course! While running 10 agents at once!
Is it possible to get that headless Chrome browser + agent working locally? With something like Cursor?
- ability to run locally somehow. I have my own IDE, tools etc. Browser IDEs are definitely not something I use willingly.
- ability to get all code, and deploy it myself, anywhere
---
Edit: forgot to add. I like that every video in the Elixir/Phoenix space is the spiritual successor to the "15-minute Rails blog" from 20 years ago. No marketing bullshit, just people actually using the stuff they build.
You could also have it use GitHub and do PRs for codex/devin-style workflows. Running phoenix.new itself locally isn't something we're planning, but opening the runtime for SSH access is high on our list. Then you could do remote ssh access with local vscode or whatever.
So no plans to open the source code?
These other details that are not “just coding” are always the biggest actual impediments to “showing your work”. Thanks for making this!! Somehow I am only just discovering it (toddler kid robbing my “learning tech by osmosis” time… a phenomenon I believe you are also currently familiar with, lol)
The LLM chat taps out but I can't find a remaining balance on the fly.io dashboard to gauge how I'm using it. I _can_ see a total value of purchased top ups, but I'm not clear how much credit was included in the subscription.
It's very addictive (because it is awesome!) but I've topped up a couple of times now on a small project. The amount of work I can get out of the agent per top-up does seem to be diminishing quite quickly, presumably as the context size increases.
PS: Why can't I get IEx to have working command-line history and editing? ;-P
2. How do you handle 3rd party libraries? Can the agent access library docs somehow? Considering that Elixir is less popular than more mainstream languages, and hence has less training data available, this seems like an important problem to solve.
But I hear you; thanks so much for taking the time to set me straight. Either way, you have done some visionary things here and you should be content with your good work! This stuff does not work for me for purely circumstantial reasons (too poor), but I'm still always very curious about the stuff coming out!
Again, so sorry. Congrats on the release and hope your day is good.
I was curious what the pricing for this is? Is it normal fly pricing for an instance, and is there any AI cost or environment cost?
And can it do multiple projects on different domains?
web https://example.com/file.pdf Error: page.goto: net::ERR_ABORTED at https://example.com/file.pdf Call log: - navigating to "https://example.com/file.odf", waiting until "load" at main (/usr/local/lib/web2md/web2md.js:313:18) { name: 'Error' }
/workspace#
How do you protect the host Elixir app from the agent shell, runtime, etc
Quick followup: if the agent's running on a separate machine and interacting remotely, how are failure modes handled across the boundary? Like if the agent crashes mid-operation or sends a malformed command, does the remote runtime treat it as an external actor, or is there a strategy linking both ends for fault recovery or rollback? Just trying to understand where the fault tolerance guarantees begin and end across that split.
1. Remote agent - it's a containerized environment where the agent can run loose and do whatever - it doesn't need approval for user tasks because it's in an isolated environment (though it could still accidentally do destructive actions like edit git history). I think this alone is a separate service that needs to be productionized. When I run claude code in my terminal, I want it to automatically spin up the agent in an isolated environment (locally or remotely) and have it go wild. Easy to run things in parallel.
2. Deep integration with fly. Everyone will be trying to embed AI deep into their product. Instead of having to talk to chatgpt and copy paste output, I should be able to directly interact with whatever product I'm using and interact with my data in the product using tools. In this case, it's deploying my web app
https://hub.docker.com/r/linuxserver/kasm
https://www.reddit.com/r/kasmweb/comments/1l7k2o8/workaround...
It does not handle any infrastructure, so no hosting. It allows me to set multiple small tasks, come back and check, confirm and move forward to see a new branch on GitHub. I open a PR, do my checks (locally if I need to) and merge.
How is this innovation?
It’s in fact one of my predictors for if they are going to be enthusiastic about agents or not.
And you wouldn’t think containerization would be a big leap but this stuff is so new and moving so fast that combining them with existing tech can surprise people.
I worked all day on a Phoenix app we’re developing for ag irrigation analysis. Of late, my “let’s see what $20/mo gets you” experiment is Zed with its agentic offerings.
It actually writes very little Elixir code for me. Sometimes, I let it have a go, but mostly I end up rewriting that stuff. Elixir is fun, and using the programming model as intended is enlightening.
What I do direct it to write is a huge amount of the HEEx stuff, with an eventual pass over it to clean it up. I have not memorized all of the nuances of CSS and HTML. I do not want to. And writing it has got to be the worst syntactic experience in the history of programming. It’s like someone said Lisp was cool; rather than just gobs of nested parentheses, let’s double, nay triple, no quadruple down on that. We’ll bracket all our statements/elements with a PAIR of hard-to-type characters, and for funsies, we’ll spell out different words in there. And then when it came to ways of expressing lists of things, it’s like someone said “gimme a little bit of ini, case insensitivity, etc.” And every year, we’ll publish a new spec of new stuff that preserves the old while adding the new. I digress…
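(For context, a small example of the kind of HEEx function component being described; the module name and Tailwind classes are made up for illustration.)

    defmodule MyAppWeb.Badge do
      use Phoenix.Component

      attr :label, :string, required: true
      attr :color, :string, default: "bg-emerald-100 text-emerald-800"

      # A pile of angle brackets, quotes, and utility classes: the kind of
      # markup being delegated to the agent above, then tidied up by hand.
      def badge(assigns) do
        ~H"""
        <span class={["inline-flex items-center rounded-full px-2 py-1 text-xs", @color]}>
          <%= @label %>
        </span>
        """
      end
    end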
I view agentic coding as an indictment of how bad programming has gotten. I’m not saying there wouldn’t be value, but a huge amount of the appeal is that web tech is like legalese, filled with what are probably hidden bugs that are swallowed by browsers in a variety of unpredictable ways. What a surprise that we’ve given up and decided to let the key tools do the probabilistic right thing. It’s not like we had a better chance of being any more precisely correct on our own anyway.
As an Elixir enthusiast I've been worried that Elixir would fall behind because the LLMs don't write it as well as they write bigger languages like Python/JS. So I'm really glad to see such active effort to rectify this problem.
We're in safe hands.
It's a negative point for engineering leaders who are the decision makers on tech stacks as it relates to staffing needs: LLMs not writing it well, developers who know it typically needing higher compensation, a DIY approach to libraries when there aren't any or they were abandoned and haven't kept pace with deprecations/changes, etc.
In the problem space of needing a web framework to build a SaaS, to an engineering leader there are a lot of other tech stacks that work better organizationally (i.e. not comparing the tech itself or benchmarks, but staffing, ecosystem, etc.) to solve web SaaS business problems.
I don't know where I stand personally since I'm not at the decision maker level, just thought I'd point out the non-programmer thought process I've heard.
From time to time it tries to do something a little old-school, but nothing significant really. It's very capable at spitting out entire new features, even in liveview.
Overall the experience has been very productive, and at least on par with my recent work on similar-sized Python and Next.js applications.
I think because I'm using mostly common and well understood packages it has a big leg up. I also made sure to initialise the Phoenix project myself to start with, so it didn't try to go off in some weird direction.
Before this I did a small project and I hit the 50 free tier limit through Zed by the time I was about 90% done. It was a small file drop app where internal users could create upload links, share them with people who could use them to upload a file. The internal user could then download that file. So it was very basic, but it churned out a reasonable UI and all the S3 compatible integration, etc.
I had to intervene a bit and obviously was reviewing everything and tweaking where needed. But I was surprised at how far I got on the 50 free prompts.
It's hard to know what you really get for that prompt limit though as I probably had a much higher number of actual prompts than they were registering. It's obviously using some token calculation under the hood and it's not clear what that is. All in all I probably had about 60-70 actual prompts I ran through it.
My gut says 500/mo would feel limited if I was going full "vibe" and having the LLM do basically everything for me every day. That said, this is the first LLM product I'm considering personally paying for. The integration with Zed is what wins for me over Claude, where you'd have to pay for API credits or use Claude Code. The way they highlight code changes and stuff really is nice.
Bit of a brain dump, sorry about that!
For Claude Code, the limit is reset every 5 hours, so if you hit it, you rest a bit. Not that big a deal to me. But the way it works I find much more stressful. You end up reviewing just about everything it is doing. It is step-by-step. Some of it you can just say yes, do it without permission, but it likes to run shell commands, and for obvious reasons arbitrary shell commands need your explicit yes for each run. This is probably a great flow if you want a lot of control over what it is doing. And the ability to intercede and redirect it is great. But if you want more of a "I just want to get the result and minimize my time and effort" approach, then Zed is probably better for that.
I am also experimenting with OpenAI's codex which is yet a different experience. There it runs on repos and pull requests. I have no idea what their rate/limit stuff will be. I have just started working with it.
Of the three, disregarding cost, I like Zed's experience the best. I also think they are the most transparent. Just make sure never to use the burn mode. That really burns through the credits very quickly for no real discernible reason. But I think it is also limited to either small codebases or prompts that limit what the agent is going through to get up to speed due to the context window being about 120k (it is not 200k as the view seems to suggest).
If Phoenix.new helps solve that problem, I’m all for the effort. But otherwise, the sole focus of the community leaders of Elixir should be squarely and exactly focused on creating the incentives and dynamics to grow the base.
Compare, for example, Mastra in TypeScript or PydanticAI in Python. Elixir? Nothing.
Not here to bash. It’s more just a disappointment because otherwise I think nothing comes close.
Want a first-party client library for the service you're using? Typically the answer is "too bad, Elixir developer." And writing your own Finch or Req wrapper for their REST endpoint simply isn't a valid answer.
>For its size, Elixir is doing quite well.
I'm actually arguing the opposite. Elixir is not doing well because of its size. So how can that be influenced and changed?
Some languages—Clojure is a good example—have packages from 10 years ago, entirely unmaintained, that still work great because no maintenance is needed.
You think just because an author bumps the version number of a library it's somehow better than a library that is considered complete?
It boggles my mind that people actually think this way.
If you want to take your website and business down, use ChatGPT-4o's code
https://www.youtube.com/live/V2b6QCPgFTk
I mean seriously, fuck everything about how the data is gathered for these things, and everything that your comment implies about them.
The models cannot infer.
The upside of my salty attitude is that hordes of vibe coders are actively doing what I just suggested -- unknowingly.
I am not sure, but the cat is out of the box. I don't think we can do anything at this point.
My experience with software development is maybe different than yours. There's a massive amount of not-yet-built software that can improve people's lives, even in teeny tiny ways. Like 99.999% of what should exist, doesn't.
Building things faster with LLMs makes me more capable. It (so far) has not taken work away from the people I work with. It has made them more capable. We can all build better tools, and faster than we did 12 months ago.
Automation is disruptive to people's lives. I get that. It decreases the value of some hard-earned skills. Developer automation, in my life at least, has also increased the value of other people's skills. I don't believe it's anti-worker to build more tools for builders.
We agree on this completely, however you and I know there are plenty of people without jobs in the world who could be employed to do this work. You are spending your finite amount of time on earth working with services that are trying to squeeze the job market (they've said this openly) rather than spending it increasing the welfare of workers by giving them work.
> Automation is disruptive to peoples' lives.
You know the difference between automation and the goals of these companies. You know that they don't want to make looms that increase the productivity of workers, they want to replace the worker so they never have to pay wages again.
Saying the quiet part loud here.
It's really a matter of positive sum/growth mindset vs scarcity/status quo mindset.
It's more than evident that software has automated away all kinds of wage labor from the aforementioned typist pools to Hollywood special effects model-makers.
What's different now is that it is actually the software creators’ labor that is in danger of automation (I think this is easily overstated but it is obviously true to some degree).
I get that it feels different for us now that OUR ox is the one being gored. And I do think there will be no end of negative externalities from the turn towards AI. But none of that refutes the above respondent's point?
1. Typists are still around and so are special effects model-makers. 2. People who program aren't in danger of automation. 3. These services are entirely unsustainable, they will absolutely not last at their current pace.
The premise of this entire work, detailed by the creator, is to utilize a program to reduce the amount of work a programmer is required to do. They believe ultimately, like most results of improved automation, that this will result in more things we can work on because we have more time. I agree that this would likely be the case! We could also simply make more programmers, could we not? Why haven't we? Do the 18k people homeless in my city tonight not deserve a shot at learning a skill before we even think about making the work easier per person?
Finally, and more to the point, genAI is built by and designed to eliminate workers entirely. The money that goes into those services funds billionaires who seek to completely and totally annihilate the concept of the proletariat. When I make a tool that helps workers at my job do their job better I am not looking to eliminate that person from the company.
The days are numbered where humans are sitting typing out code themselves.
It's akin to the numbered days of type writer secretaries of the 20th century.
I'm sure your poor understanding of the history of improved tooling, like "type writer secretaries", will be a soft comfort in the future.
Overall I think we would all be happier if efficient machines take away the drudgery of our daily work and allow us to focus on things that really matter to us. . . as long as our basic needs are met.
Nope, I've been doing it for 16 years.
I have a question about how you manage context, and what model you use. Gemini seems the best at working with large context windows right now, but even that has its limitations. Thinking about working with Claude Code, a fair bit of my strategizing is in breaking down work and managing project state to keep context size manageable.
I'm watching the linked video and it's amazing seeing it in action, but I'm imagining continuing to work on a project and wondering if it will start losing its way so to speak. Can you have it summarize stuff, and can you start a session clean with those summaries, and have it "forget" files it won't need to use for this next feature, etc?
I've been daydreaming of an agentic framework that maximally exploits BEAM. This isn't that, but maybe jido[0] is what I'm looking for.
0. https://github.com/agentjido/jido
https://elixirforum.com/t/is-anyone-working-on-ai-agents-in-...
Which LLMs do you use that you find are best with Elixir/Phoenix?
I guess I should also note that I haven't really used LiveView much.
hard to put confidence in AI vibe hacks when the basic stuff just doesn't work.
* (Mix) The database for Myapp.Repo couldn't be created: killed
A few issues:
1. The 150 message limit is understandable, but it suddenly pops up and you lose significant work. I was working on a UI mockup, and just as I had finished and was ready to move on to implementation, this limit appeared and a significant part of my work was lost.
2. After the first credit, the credit seems to exhaust pretty fast, which makes it expensive, especially when you are trying it out.
3. Also, I don't understand why, when you ask it to prototype different screens, it overwrites the same file.
4. It is not able to stop to seek user feedback, but keeps trying different approaches, which kind of exhausts the credit. It would be nice if it described its approach, so the human developer could provide feedback.
5. It seems it is using OpenAI, because it is often self-congratulatory to the point of being annoying sometimes.
This is a tangential comment and should not detract from what Chris and team have created. I think closing the loop between agent and the running output is a great/critical step forward.
However, I find using AI to build traditional apps with a UI is a bit like improving the way automobile steering wheels are made, in a world that soon won't need steering wheels at all.
If the AI is so good to write the code for an App, how much longer before you won't need those Apps in the first place? And then the question is, what will fill the role that Apps play today.
A coded app is significantly more efficient to execute, and more predictable, than dealing with AI in most situations.
Feels like we're getting into a weird situation if LLM providers are publishing open source agentic coding tools and OSS web app frameworks are publishing closed source/non-BYOK agentic coding tool. I realize this may not be an official "Phoenix" project but it seems analogous to DHH releasing a closed-source/hosted "Rails.new" service.
There’s a “clone Git repo” thing in the left sidebar; use that to clone the project locally, mix deps.get, mix phx.server and you’re up. You can deploy this anywhere you want.
I find Claude to have quite a bit of problems trying to navigate changesets + forms + streams in my codebase, just wondered if you had any tips of making it understand better :)
And thinking about it made me realize that soon there will be a completely different programming language used solely by coding agents. ChatGPT gives an interesting take on this, "The fundamental shift is that such a language wouldn’t be written or read, but reasoned about and generated. It would be more like an interlingua between symbolic goals and executable semantics, verbose, unambiguous, self-modifying, auto-verifiable, evolving alongside the agents that use it").
I've been working with Phoenix a lot the last few months, and I like it a lot. But I do get the sense that the project suffers from wanting to perpetually chase the next new thing, even when that comes at the expense of the functional elegance and conceptual cohesiveness that I think is Phoenix' main strength.
LiveView is a great example. It's a very neat bit of tech, but it's shoe-horned surprisingly awkwardly into Phoenix. There's now a live view and non-live view way to do almost everything in Phoenix, and each has their own different foibles and edge cases. A lot of code needs to work with both (e.g. auth needs to happen at both levels, basically), meaning a surprising amount of code needs to have two, nearly identical variants: one with traditional Plug idioms, and then another using LiveView equivalents. Quick little view helpers end up with either convoluted 'what mode am I in?' branching, or (more likely) in view-mode-dependent wrappers around view-mode-independent abstractions. This touches even the simplest helpers (what is the current path?) and becomes more cumbersome from there. (And given the lack of static analysis for views, it can be non-trivial to even find out what is and isn't actually working where.)
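(To make the duplication concrete, this is roughly what "auth at both levels" looks like in a phx.gen.auth-style router. It's a fragment that would live inside the generated MyAppWeb.Router; the module and hook names follow the generator's conventions and would vary per app.)

    scope "/", MyAppWeb do
      # Plug-level check guards classic controller routes ("dead" views)
      pipe_through [:browser, :require_authenticated_user]

      get "/exports", ExportController, :index

      # LiveViews need their own on_mount hook doing the same job
      live_session :require_authenticated_user,
        on_mount: [{MyAppWeb.UserAuth, :ensure_authenticated}] do
        live "/settings", UserSettingsLive, :edit
      end
    end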
Not every website should be a live view (hiking directions, for example), but that is clearly the direction of travel in Phoenix. Non-live views get the disparaging moniker 'dead views', and the old Phoenix.HTML helpers have been deprecated in favour of <.form />-style live components. The generators depend on those, plus Tailwind, Hero Icons and (soon) DaisyUI, all fetched live from various places on the Internet on build. This tight coupling to trendy dependencies will age poorly, and it makes for bumpy on-boarding (opinionated and tightly coupled isn't necessarily a smoother experience, just a more inflexible one).
So with all of that in mind, while I'm not shocked to see Phoenix jump on the vibe coding hype train, I guess I am disappointed.