
keriati1
65 karma
Tech Lead, EU, Finance

  1. I’m not sure why, but the first thing I did was check if HTTP status code 418 was listed.
  2. I can also recommend using the stacked PR approach instead. We have had it for years, and PR review "issues" are not a thing for us.

    I still encourage doing a lot of small commits with good commit messages, but don't submit more than 2-3 or 4 commits in a single PR...
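
    A minimal sketch of what that flow can look like with plain git; the branch names are made up for illustration, and each branch becomes its own small PR targeting the one below it.

    ```bash
    # Rough sketch of a stacked-PR flow with plain git.
    # Branch names are illustrative; each branch is its own small PR.
    git switch -c feature/step-1-extract-client main
    # ...2-3 small commits...
    git push -u origin feature/step-1-extract-client    # PR #1 -> main

    git switch -c feature/step-2-migrate-callers feature/step-1-extract-client
    # ...2-3 small commits...
    git push -u origin feature/step-2-migrate-callers   # PR #2 -> step-1 branch

    # Once PR #1 merges, rebase the rest of the stack onto main:
    git switch feature/step-2-migrate-callers
    git rebase --onto main feature/step-1-extract-client
    git push --force-with-lease
    ```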

  3. What model size is used here? How much memory does the GPU have?
  4. We already had some around as build agents. We don't plan to do any fine-tuning or training, so we did not explore this at all. However, I don't think it is a viable option.
  5. We run coding assistant models locally on MacBook Pros, so here is my experience: on the hardware side I recommend Apple M1 / M2 / M3 with at least 400 GB/s memory bandwidth. For local coding assistance this is perfect for 7B or 33B models.

    We also run a Mac Studio with a bigger model (70B), an M2 Ultra with 192 GB RAM, as a chat server. It's pretty fast. Here we use Open WebUI as the interface.

    Software-wise, Ollama is OK, as most IDE plugins can work with it now. I personally don't like the Go code they have. Also, some key features I would need are missing and just never get done, even though multiple people have submitted PRs for some of them.

    LM Studio is better overall, both as a server and as a chat interface.

    I can also recommend the CodeGPT plugin for JetBrains products and the Continue plugin for VS Code.

    As a chat server UI, as I mentioned, Open WebUI works great; I use it with Together AI as a backend too.
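
    If you want to sanity-check a local Ollama setup, here is a quick sketch of hitting its HTTP API, which is the same API the IDE plugins talk to; the model tag is just an example.

    ```bash
    # Quick check that a local Ollama server answers; this is the same
    # HTTP API the IDE plugins use. The model tag is only an example.
    ollama pull deepseek-coder:6.7b
    curl -s http://localhost:11434/api/generate -d '{
      "model": "deepseek-coder:6.7b",
      "prompt": "Write a TypeScript function that reverses a string.",
      "stream": false
    }'
    ```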

  6. I think it is even easier right now for companies to self-host an inference server with basic RAG support:

    - get a Mac Mini or Mac Studio
    - just run ollama serve
    - run the Ollama web UI in Docker
    - add some coding assistant model from OllamaHub through the web UI
    - upload your documents in the web UI

    No code needed, and you have your self-hosted LLM with basic RAG giving you answers with your documents in context. For us the DeepSeek Coder 33B model is fast enough on a Mac Studio with 64 GB RAM and can give pretty good suggestions based on our internal coding documentation.
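
    For reference, a minimal sketch of that setup, assuming the current Open WebUI image; the image name, ports, and model tag may differ for your versions.

    ```bash
    # Minimal sketch of the self-hosted setup described above.
    # Image name, ports, and model tag are examples; check the current docs.
    ollama serve &                          # Ollama API on localhost:11434
    ollama pull deepseek-coder:33b          # the coding model

    docker run -d --name open-webui -p 3000:8080 \
      -v open-webui:/app/backend/data \
      --add-host=host.docker.internal:host-gateway \
      ghcr.io/open-webui/open-webui:main    # web UI on localhost:3000

    # Open http://localhost:3000, pick the model, and upload your internal
    # docs there to get the basic RAG context.
    ```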

  7. No, we went with a RAG pipeline approach, as we assume things change too fast.
  8. We actually already run an in-house Ollama server prototype for coding assistance with DeepSeek Coder, and it is pretty good. Now if we could get a model for this that is at GPT-4 level, I would be super happy.
  9. For projects where the estimated rewrite duration exceeds three months, we have employed an iterative approach to refactoring for several years. This methodology has yielded pretty good results.

    We also utilize a series of Bash scripts designed to monitor the refactoring process. These scripts collect data regarding the utilization of both the old and the new "state" within the codebase. The collected data is then fed into Grafana, providing us with a clear overview of our progress.
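
    A minimal sketch of what such a script can look like, assuming the counts go into an InfluxDB instance that Grafana reads; the paths, grep patterns, and endpoint are made up for illustration.

    ```bash
    #!/usr/bin/env bash
    # Sketch: count usages of the old and new API in the codebase and push
    # the counts to InfluxDB (a Grafana data source). Patterns and URL are
    # examples only.
    set -eu

    REPO_DIR="/path/to/repo"
    INFLUX_URL="http://influxdb:8086/write?db=refactoring"

    old_count=$(grep -rc --include='*.ts' 'LegacyHttpClient' "$REPO_DIR" \
      | awk -F: '{sum += $2} END {print sum + 0}')
    new_count=$(grep -rc --include='*.ts' 'FetchHttpClient' "$REPO_DIR" \
      | awk -F: '{sum += $2} END {print sum + 0}')

    # InfluxDB line protocol: measurement field=value,field=value
    curl -s -XPOST "$INFLUX_URL" \
      --data-binary "refactor_progress old_usages=${old_count}i,new_usages=${new_count}i"
    ```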

  10. I find it awesome. Maybe it is targeted at my age group. Sadly I have an iPhone 13 and won't upgrade in the next 2-3 years; otherwise I would order it right now.
  11. +1 for the bucket queue. I learned about that trick a few weeks ago and in my use cases it cut the time to run A* by around 60-70%.
  12. I just played around with it and wanted to compare what an article about this event would look like.

    But maybe it was a bad idea to post it, if we want to stay on topic.

  13. I also got a few messages from friends about this watch, but I don't think it will replace the "serious" diving computers for now; the missing dive pod support is already killing it.

    Testing it next to a Shearwater (or Ratio, Suunto EON) would be interesting to see the differences in the algorithm. We will probably see a bunch of YouTube reviews doing this comparison.

    With what I know about the watch so far, I would really not recommend using it as the only diving computer.

    Also the "big buttons with gloves in mind" are kinda funny if I think about diving dry gloves ;)

  14. Existing transmitters don't use Bluetooth but some custom radio signal, for example on 123 kHz; I don't think the watch can support that.
  15. Pretty sure this is happening already. I started to get some messages on LinkedIn that include details from my descriptions, but it feels super unnatural, like a template was filled with some data-mined content...
  16. Visualising a Codebase: This sounds very interesting; it looks like similar graphics to what CodeScene creates.

    The dependencies between the modules seem like a nice addition to me. I don't think CodeScene has that. Can't wait to try this on our bigger projects.

    I never found a really good way to visualize large codebases and the dependencies between their modules; does somebody have something for this?

  17. It's good to see Firefox on top of this list.

    Is Protonmail usually accessed through web clients? I only use the native iOS client and macOS Mail with the Bridge app. Somehow I would expect Protonmail users not to use the web client so much, but maybe I am wrong.

  18. Any big old enterprise will provide a lot of IT jobs where you can basically sleep all day long.
  19. How can 1 000 000 German tourists drive to Croatia in the summer for holidays if they all have electric cars? Right now in the high season the normal gas stations have long lines of cars waiting...
  20. I think the tutorials in the game are not so useful; however, they gave me some basics. I spent some hours on YouTube, and from zero flying knowledge I can say I am now familiar with the Cessna 152 (takeoff, landing, navigation) and also with the 172 with the G1000 (including ILS). I wish all these introductions were in the game.

    However, the overall experience is still very beta. I got a lot of freezes, bugs on the map (like a big bump on some runways that flips the airplane and it's game over), and sometimes the nav gets unresponsive (a partial freeze of the game).

    Even worse is that after you buy the game on Steam, any in-app purchase is done on Steam but the download comes from Microsoft, and that just doesn't work for a bunch of people.

    At least the free models just work when people copy them into place, while paid content is not even showing up at the moment.

  21. Yep, seems like Apple made another move against the web.
  22. I learned to use the TICK stack and Grafana.

    Originally I wanted to look at some COVID data with my own visualization; I was thinking of D3 first.

    Ended up with the full TICK stack and Grafana, monitoring all the devices I have at home and setting up alerts for all kinds of silly stuff. The usefulness is questionable, but I learned a lot.

    I now have some insight into the local COVID spread and am happy to report no new cases discovered in my town for a month \o/ (according to the government-provided API).
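
    For anyone curious, a rough sketch of standing up a similar stack with Docker; the image tags, ports, and Telegraf config path are assumptions, so adjust them to your setup.

    ```bash
    # Rough sketch of a home TICK stack + Grafana via Docker.
    # Image tags, ports, and telegraf.conf are illustrative.
    docker network create monitoring

    docker run -d --name influxdb  --network monitoring -p 8086:8086 influxdb:1.8
    docker run -d --name telegraf  --network monitoring \
      -v "$PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro" telegraf
    docker run -d --name kapacitor --network monitoring \
      -e KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086 kapacitor
    docker run -d --name grafana   --network monitoring -p 3000:3000 grafana/grafana

    # Add http://influxdb:8086 as an InfluxDB data source in Grafana, then
    # build dashboards and alerts on top of the Telegraf measurements.
    ```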

  23. It won't be just for 30 days. How can people complain about Netflix quality now? I will be grateful if we have anything still working in a few months...
  24. How much data traffic could Google save if it just bundled all fonts with Chrome?
  25. Searching for opening times of local shops is what keeps me on Google for now... Very basic but very common search for me...
  26. We do standups; however, the questions are more like:

    What task moved yesterday? (handover?)
    What task is going to move today?

    and the most important one: What is blocked? (Who can help to unblock?)

    I can recommend this format for everybody.

  27. We implemented Vim-like shortcuts for the GitHub Enterprise code review screen and it works pretty nicely; no mouse required. However, I like the dependency graph; maybe we'll implement it too...
  28. This is a very good explanation of the correct answer.
  29. Is it now TypeScript or JavaScript? It's a huge difference...

