
bluefirebrand
It would be absolutely asinine to bombard your teammates like this, and it would be a massive sign that you're not cut out for the work if you had to.

handfuloflight
I wasn't suggesting we literally bombard teammates.

The whole point is that with LLMs, you can explore ideas as deeply as you want without tiring them out or burning social capital. You're conflating this with poor judgment about what to ask humans and when.

Taking 'bombard' literally is itself pretty asinine when the real point is about using AI to get thoroughly informed before human collaboration.

And if using AI to explore questions deeply is a sign you're 'not cut out for the work,' then you're essentially requiring omniscience, because no one knows everything about every domain, especially as those domains constantly evolve.

bluefirebrand OP
My point was that

> You really haven't, since they can't just generate tokens at the rate and consistency of an LLM

Is wrong. It's not because they can't generate tokens at the rate and consistency of an LLM.

It's because trying to offload your work onto your coworkers this way would make you a huge jerk.

handfuloflight
Exactly. You're literally making my point while trying to argue against me.

Whether it's because humans can't handle the pace or because it would make you a jerk to try, either way you just agreed that humans can't or shouldn't handle unlimited questioning. That's precisely why LLMs are valuable for deep exploratory thinking, so when we engage teammates, we're bringing higher-quality, focused questions instead of raw exploration.

And you're also missing that even IF someone were patient enough to take every question you brought them, they still couldn't keep up with the pace and consistency of an LLM. My original point was about what teammates are 'willing to take', which naturally includes both courtesy limits AND capability limits.

bluefirebrand OP
> That's precisely why LLMs are valuable for deep exploratory thinking, so when we engage teammates, we're bringing higher-quality, focused questions instead of raw exploration.

This isn't really new, though. We used to use search engines, language docs, and Stack Overflow for this.

Before that, people used mailing lists and reference texts.

LLMs don't really get me to answers faster than Google and SO did, imo.

And it still relies on some human having asked and answered the question before, so that the LLM could be trained on it.

handfuloflight
No offense, but if this is your response, I don't think you're qualified to have this discussion; you clearly lack basic experience using LLMs.

To make my point, let me know when Stack Overflow has a post specifically about the nuances of your private codebase.

Or when Google can help you reason through why your specific API design choices might conflict with a new feature you're considering. Or when a mailing list can walk through the implications of refactoring your particular data model given your team's constraints and timeline.

LLMs aren't just faster search: they're interactive reasoning partners that can engage with your specific context, constraints, and mental models. They can help you think through problems that have never been asked before because they're unique to your situation. That's the 'deep exploratory thinking' I'm talking about.

The fact that you're comparing this to Stack Overflow tells me you're thinking about LLMs as glorified search engines rather than reasoning tools. That explains why you think teammates can provide the same value: you're not actually using the technology for what it's uniquely good at.
