
brainless
Karma: 2,222
Hey, I am Sumit. I live in a little Himalayan village in India.

Software engineer for 16 years across multiple startups (Python/TypeScript/Rust); I also run Curry Hostel.

I am building nocodo, a multi-model, multi-agent framework: agents for tasks like database, Excel, and PDF analysis, each with dedicated tools, and orchestration on top. It deploys as a single binary with logging and evals built in:

- https://github.com/brainless/nocodo

I have given up city life and hustle culture. I share my home as a co-living space, mainly for artists and digital nomads:

- https://www.instagram.com/curryhostel

Socials:

- https://meet.hn/city/in-Kolkata
- https://linkedin.com/in/brainless


  1. I agree, but that is not the issue. See, the really "large" models are great at a few things, but they are not needed for daily tasks, including most coding tasks. Claude Code itself uses Haiku for a lot of tasks.

    The non-SOTA companies will eat more of this pie and squeeze more value out of the SOTA companies.

  2. I do not know what that next level is, to be honest. Web search, crawlers, code execution, etc. can all be easily added on the agent side. And some of the small models are so good when the context is small that being locked into one provider makes no sense. I would rather build a heavy multi-agent solution using Gemini, GLM, Sonnet, Haiku, GPT, and even BERT, GLiNER and other models for specific tasks. Low cost, no lock-in, and still high quality output.
  3. I know this will sound strange, but SOTA model companies will eventually allow subscription-based usage through third-party tools. For any usage whatsoever.

    Models are pretty much democratized. I use Claude Code and opencode, and these days I get more work done with GLM or Grok Code (via opencode). The Z.ai (GLM) subscription is so worth it.

    Also, mixing models, small and large, is the way to go. Different models from different providers. This is not like cloud infra, where you need to plan your usage. Models are pretty much text in, text out (for text-only models, at least). The minor differences between APIs are easy to work with.
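    The "text in, text out" view above can be sketched as a thin provider-agnostic wrapper. This is a hypothetical sketch: the `Router` class, the `complete` signature, and the stub providers are made up for illustration, standing in for real API clients.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A common interface: every provider is just a function that takes a
# prompt string and returns a completion string. The per-API quirks
# (auth, request shape) live inside each Provider function.
Provider = Callable[[str], str]

@dataclass
class Router:
    providers: Dict[str, Provider]

    def complete(self, model: str, prompt: str) -> str:
        # Dispatch to whichever provider backs the requested model.
        return self.providers[model](prompt)

# Stub providers standing in for real clients (GLM, Haiku, etc.)
router = Router(providers={
    "glm": lambda p: f"[glm] {p}",
    "haiku": lambda p: f"[haiku] {p}",
})

print(router.complete("glm", "Summarize this diff"))
```

    With a wrapper like this, swapping a cheap model in for a small, well-scoped task is a one-line change at the call site.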

  4.   Location: India
      Remote: Yes, can travel for sprints
      Willing to relocate: Can live in the EU for extended periods
      Technologies: Rust, Python, TypeScript, but focused only on agent-enabled products, with multi-agent, multi-tool orchestration using supervisor or other patterns
      Résumé/CV: https://docs.google.com/document/d/1rxGkNPF8bT89sC7WZ1Fd3W6yWnLYFAIEItQVYituNuE/edit?usp=sharing
      Email: My name (no spaces) hosted with Google's email domain
    
    
    Hey, I am Sumit Datta. I have been working full time on my ideas around LLMs for about 2 years. My product nocodo, https://github.com/brainless/nocodo, is a culmination of my learning.

    It is a multi-model, multi-agent orchestration framework, and I am building logging, evals, and user and permission management on top. If you are building an agent-based product, I am happy to talk. nocodo is GPL v3 licensed, but I sell commercial licenses and support.

  5. I am always interested in seeing alternatives in the edge compute space, but self-hosting does not make sense to me.

    The benefit of edge is availability close to customers. Unless I run many servers, it is simply easier to run one server instead of "edge".

  6. The course is broader, and it is for people who are not from a technical background at all. Think of a course that introduces the idea of working with LLMs for day-to-day work: what agents are, then a focus on coding agents, and so on.
  7. I co-mentor with a large online school for an AI accelerator course. We get about 600 participants each month, paying about $500-600 for a 14-day course. I only co-mentor for 2 days - the days we teach fundamentals of software development and then show how to code with Replit, Bolt, Lovable, Emergent, etc.

    One of the most common questions is "can I build on xyz and shift to abc because I do not want to pay?" And another is "can I host the code myself?"

    Customers know they do not need to stay with any of these code builders. The platforms know it too. They spend tons of $ to get customers, who use the credits and then leave.

    Manus is running a $5000 credit offer for 2000 people. A simple search shows many such offers: https://x.com/search?q=ManusAI%20credits&src=typed_query

    Each of these players is just eating the others' customers and showing growth. Perplexity has acquired who knows how many customers in India through their 12-month free Pro offer via Airtel (a telecom provider): https://www.perplexity.ai/help-center/en/articles/11842322-p...

  8. Lovely project. Also, @rubenvanwyk mentioned SlateDB. I am not sure if this will fit my use case, but today I was looking for data hosting options for a self-hosted LLM+bot for email/calendar.

    I have a product I tried and stopped working on before: https://github.com/pixlie/dwata, and I want to restart it. The idea is to create a knowledge graph (using GLiNER for NER). Compute would be either on desktop or in the cloud (instances).

    Then store the data on S3, Cloudflare Workers KV, or AWS DynamoDB and access it with cloud functions to hook up a WhatsApp/Telegram bot. I may stick with the DynamoDB or Cloudflare options eventually, though (both have cloud function support).

    I need persistent storage of key/value data (the graph, maybe embeddings) for cloud functions. A completely self-hosted email/calendar bot with an LLM, own cloud, own API keys. Super low running cost.

  9. I simply ask Claude Sonnet, using Claude Code, to use opencode. That's it! Example:

      We need to clean up code lint and format errors across multiple files. Check which files are affected using cargo commands. Please use opencode, a coding agent that is installed. Use `opencode run <prompt>` to pass a per-file prompt to opencode, wait for it to finish, check and ask again if needed, then move to the next file. Do not work on the files yourself.
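    The per-file loop described in the prompt above could also be scripted directly. A minimal sketch: `opencode run <prompt>` is the only real interface assumed here; the prompt text, the `opencode_cmd` helper, and the file list are illustrative. In practice each command would go to `subprocess.run`, with the result reviewed before moving on.

```python
import shlex

def opencode_cmd(path: str) -> list:
    # Build a per-file prompt for opencode's non-interactive mode.
    prompt = (
        f"Fix lint and format errors in {path} only. "
        "Run the cargo checks for this file, then stop."
    )
    return ["opencode", "run", prompt]

# One focused invocation per affected file keeps each agent run small.
for f in ["src/main.rs", "src/lib.rs"]:
    print(shlex.join(opencode_cmd(f)))
```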
  10. I do not spend $100/month. I pay for one Claude Pro subscription and a (much cheaper) z.ai Coding Plan, which is about one fifth the cost.

    I use Claude for all my planning, create task documents, and hand them over to GLM 4.6. It has been my workhorse as a bootstrapped founder (building nocodo, think Lovable for AI agents).

  11. Companies are lazy. Big ones are lazy and scared of not having someone to hold accountable.
  12. I am using GPL 3.0 mostly from a business standpoint, so that I can sell a commercial license to companies that may want to modify the project. In reality, LLMs will launder everything they can; I am not sure that can be stopped, and it affects every project. The reason LLMs are as good as they are is that they train on all the code that is available.
  13. I use LLMs to generate almost all my code. Currently at 40K lines of Rust, backend and a desktop app. I am a senior engineer with almost all my tech career (16 years) in startups.

    Coding with agents has forced me to generate more tests than we do in most startups, think through more things than we get the time to do in most startups, create more granular tasks and maintain CI/CD (my pipelines are failing and I need to fix them urgently).

    These are all good things.

    I have started thinking through my patterns for generating unit tests. I was mostly generating integration or end-to-end tests before. I started using helper functions in API handlers and writing unit tests for the helpers, bypassing the API-level arguments (so there is no API mocking or framework test harness to deal with). I started breaking tasks down into smaller units so I can pass them to a cheaper model.
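    The helper-extraction pattern described above, sketched in Python. This is a hypothetical example: the function names, the discount logic, and the request shape are all made up to show the shape of the pattern, not taken from nocodo.

```python
from typing import Optional

# Pure helper extracted from an API handler: all the logic, none of
# the framework. Unit tests hit this directly, so no request mocking.
def compute_discount(subtotal: float, coupon: Optional[str]) -> float:
    if coupon == "WELCOME10":
        return round(subtotal * 0.9, 2)
    return subtotal

# The handler itself stays a thin shell around the helper.
def handle_checkout(request: dict) -> dict:
    total = compute_discount(request["subtotal"], request.get("coupon"))
    return {"total": total}

# Unit tests bypass the API layer entirely:
assert compute_discount(100.0, "WELCOME10") == 90.0
assert compute_discount(100.0, None) == 100.0
```

    Because the helper is a pure function, a cheap model can generate both the helper and its tests from a small, focused prompt.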

    There are a few patterns in my prompts, but nothing that feels out of place. I do not use agents files or MCPs. All sources are here: https://github.com/brainless/nocodo (the product is itself going through a pivot, so there is that).

  14. Halfway through I realized where this is going. Could not hold the tears. These are tough choices. My parents are alive, getting older. My dad has fairly serious mental health issues. Life has never been easy in a very dysfunctional family. I stayed away from family for many years. Now, I am 41 and these last few years, I have started to realize that I may not have much time with them.

    We are busy people but no matter how we try, we cannot bring people back. We cannot make some things different. I think about that a lot. Even coming from a family of abuse and trauma that needed a decade of counseling and healing, I still feel sad they may not be there much longer.

    Thank you for the reminder. Thank you for sharing your personal story.

  15. The skills approach is great for agents and LLMs, but I feel agents have to become wider in the context they keep and more proactive in orchestration.

    I have been running Claude Code with simple prompts (e.g. 1, below) to orchestrate opencode when I do large refactors. I have also tried generating orchestration scripts instead: generate a list of tasks at a high level, then have a script go task by task, create a small task-level prompt (using a good model), and pass the task to an agent (with a cheaper model). Keeping context low and focused has many benefits. You can use cheaper models for simple, small, well-scoped tasks.
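    The task-by-task orchestration described above could look roughly like this. Everything here is a placeholder sketch: `plan_prompt` stands in for a call to a high-quality planning model, `run_agent` for handing the prompt to a cheaper coding agent, and the task list is invented.

```python
# Hypothetical two-tier orchestration: a strong model writes focused
# per-task prompts; a cheap model executes each small, scoped task.
def plan_prompt(task: str) -> str:
    # Stand-in for a call to a high-quality planning model.
    return f"Task: {task}\nContext: only the files this task touches."

def run_agent(prompt: str) -> str:
    # Stand-in for passing the prompt to a cheaper coding agent.
    return f"done: {prompt.splitlines()[0]}"

tasks = ["rename module X", "update imports", "fix tests"]
results = [run_agent(plan_prompt(t)) for t in tasks]
print(results)
```

    The point of the split is that the expensive model only sees the high-level plan once, while each cheap invocation gets a context just big enough for its one task.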

    This brings me to skills. In my product, nocodo, I am building a heavier agent which keeps track of a project, past prompts, and the skills needed, and uses the right agents for the job. Agents are basically a mix of system prompt and tools, all selected on the fly. The user does not even have to generate or maintain skills docs. I can have them generated and maintained with high-quality models from existing code in the project or the tasks at hand.

    1. Example prompt I recently used: Please read GitHub issue #9. We have phases clearly marked. Analyze the work and codebase. Use opencode, which is a coding agent installed. Check `opencode --help` for how to run a prompt in non-interactive mode. Pass each phase to opencode, one phase at a time. Add extra context you think is needed to get the work done. Wait for opencode to finish, then review the work for the phase. Do not work on the files directly; use opencode.

    My product, nocodo: https://github.com/brainless/nocodo

  16. I prefer saying LLM. But many people ask what an LLM is, and then I say AI and they get it. Unfortunate.

    At the same time, LLMs are not a bullshit generator. They do not know the meaning of what they generate, but the output is important to us. It is like asking whether a cooker knows the egg is being boiled. I care about the egg; the cooker can do its job without knowing what an egg is. Still very valuable.

    Totally agree with the platform approach. More models should be available to run on our own hardware, or at least on third-party cloud provider hardware. But Chinese models have dominated this now.

    ChatGPT may not last long unless they figure something out, given the "code red" situation already inside the company.

  17. Very happy to see this since I am building in this domain. We need external and internal context though. I am aiming for always available context for current and related projects, reference projects, documentation, library usage, commands available (npm, python,...), tasks, past prompts, etc. all in one product. My product, nocodo (1), is built by coding agents, Claude Code (Sonnet only) and opencode (Grok Code Fast 1 and GLM 4.6).

    I just made a video (2) on how I prompt with Claude Code, ask for research from related projects, build context with multiple documents, then converge into a task document, share that with another coding agent, opencode (with Grok or GLM), and then review with Claude Code.

    nocodo is itself a challenge for me: I do not write or review code line by line. I spend most of the time in this higher level context gathering, planning etc. All these techniques will be integrated and available inside nocodo. I do not use MCPs, and nocodo does not have MCPs.

    I do not think plugging into existing coding agents works; that is not how I am building. I think building full-stack is the way, from prompt to deployed software. Consumers will step away from everything other than planning. The coding agent will become more of a planning tool. Everything else will slowly vanish.

    Cheers to more folks building here!

    1. https://github.com/brainless/nocodo
    2. https://youtu.be/Hw4IIAvRTlY

  18. My experience has been the opposite. I came from Python and TypeScript, and the initial amount of reading and fighting with the compiler was very frustrating, but I understood one thing that sets Rust apart: I write code with almost the same level of bugs as a seasoned Rust developer. That is a win for years to come as the team grows and the software gets old. I will bet on it again and again.

    Now I mostly generate code with coding agents and almost everything I create is Rust based - web backend to desktop app. Let the LLM/agent fight with the compiler.

  19. May I add GLiNER to this? The original Python version and the Rust version. Fantastic (non-LLM) models for entity extraction. There are many others.

    I really think using small models for a lot of small tasks is the best way forward, but it is not easy to orchestrate.

  20.   Location: Kolkata, India
      Remote: Yes and willing to travel frequently
      Willing to relocate: No
      Technologies: Agentic development, Go, Rust, TypeScript, etc. Strongly typed stacks only. I do not write code by hand anymore, and I have my own agent. You can hire me to build for you, or you can build on top of my agent and I will consult for you.
      Résumé/CV: https://www.linkedin.com/in/brainless, https://brainless.in, https://github.com/brainless
      Email: My name, without spaces or any other characters, at Google's consumer email domain
    
    I am Sumit and I am building https://github.com/brainless/nocodo. I have built it only with coding agents. If you are building rapidly with LLMs, I am very interested in working with you. I use LLMs at every stage of my workflow. Strongly typed languages only. I have been an engineer for 16 years, a founder many times over, and have deep knowledge and experience in early-stage product building.

    Think of it as Devin AI but self-hosted, headless (use my UI or yours), customizable, multi-model, multi-OS (build/debug your apps on Windows, Mac, Linux). Enforce policies with hooks. Commercial license at $90K/year with support.

  21. Thanks for sharing.

    I did not know of this and I am looking for simple ways to isolate processes for multiple reasons. I am building a coding agent, https://github.com/brainless/nocodo, that runs (headless) on a Linux instance. Generated code is immediately available for demo.

    I am new to isolation and not looking for a container-based approach. I need isolation from a security standpoint, but I do not know enough yet. This approach looks like a great start for me.

  22. This was recently on HN and I think it adds so much value to GPUI: https://github.com/longbridge/gpui-component/
  23. In my coding agent, nocodo (1), I am thinking about using copy-on-write filesystems for cheaper multi-agent operations. But, to be honest, git worktrees may be good enough for most use cases. nocodo checks for existing worktrees in the local repo, and I will add creation and merge support too.

    1. https://github.com/brainless/nocodo
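    A minimal sketch of the worktree-per-agent idea mentioned above. The repo, branch names, and paths are made up for illustration; only `git worktree add` and `git worktree list` are the real interface.

```shell
# Hypothetical sketch: one git worktree per agent task, so parallel
# agents edit isolated checkouts of the same repository.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m init
# Each worktree gets its own branch and working directory.
git worktree add -q -b task-a ../task-a   # agent A's checkout
git worktree add -q -b task-b ../task-b   # agent B's checkout
git worktree list
```

    Merging back is then an ordinary `git merge` of each task branch, which is likely cheaper and more portable than a copy-on-write filesystem layer.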

  24. This is interesting to read and very important to me, since I am building a coding agent with team collaboration in mind. I used to use Zed daily, until I moved away from writing code directly and started generating all my projects from prompts alone.

    I think collaboration for the people who eventually use the software will be more critical in the era of agentic coding. Project management will change. We are not waiting 2 weeks to build prototypes; it gets done in an hour. What does that mean for end users - do they prompt their changes and get access to new software? Who would double-check? Would AI reviews be good enough? Would AI agents collaborate along with humans in the loop?

    There are so many unanswered questions. If anyone is keen on having these talks, I would be happy to share what I think. Here is what I am building: https://github.com/brainless/nocodo

    I want to see a future where end users can prompt their needs, have collaborators in the company to help clear things up and in an hour the feature/issue is tackled and deployed.

  25. That is a valid question.

    But that would apply to any app that deals with files like this one does.

    This one is open source, and we can run some code analysis on it, compile it locally, etc. I am not well versed in security checks, but I guess you get the idea.

  26. The CLAUDE.md file is right there, so they are probably using agentic coding.

    But why does it matter? Does the app not work? I don't have a Mac, can't check.

  27. I am building a coding agent for small businesses. The agent runs on Linux box on own cloud. Desktop and mobile apps to chat with AI models and generate software as needed.

    SSH based access with HTTP port forward. Team collaboration, multiple models, git based workflow, test deployment automation, etc.

    It is very early stage, but it now works on its own source code (the Bash tool is missing): https://github.com/brainless/nocodo

  28. This is a good point. Should we ask why so many people still go to ChatGPT? Do the existing systems get that many users interacting with them?
  29. "We are not ourselves when we are fallen down" - hits hard. I really hope this is a calling for folks who will care.

