cloverich parent
Please y'all, when you make supportive or critical claims based on your actual work, include some specifics of the task and prompt. Like actual prompt, actual bugs, actual feature, etc. I've had great success with both ChatGPT and Claude for years, am seeing around a 3x sustained output increase in my professional work, and am kicking off and finishing new side projects / features that I used to simply never finish. BUT there are some tasks I run into where it's god awful. Because I have enough good experience, I know how to work around, when to give up, when to move on, etc. I am still surprised at things it cannot do; for example, Claude Code could not seem to stitch together three screens in an iOS app using the latest SwiftUI (I am not an iOS dev). IMHO for people using it off and on or sparingly, it's going to seem either incredible or worthless depending on your project and prompt. Share details, it's so helpful for meaningful conversation!
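
For concreteness, the three-screen ask was roughly the SwiftUI equivalent of this minimal sketch (screen names invented by me, not from the actual app; iOS 16+ NavigationStack):

    import SwiftUI

    // Three screens wired together with a NavigationStack (iOS 16+).
    struct HomeScreen: View {
        var body: some View {
            NavigationStack {
                List {
                    NavigationLink("Detail") { DetailScreen() }
                    NavigationLink("Settings") { SettingsScreen() }
                }
                .navigationTitle("Home")
            }
        }
    }

    struct DetailScreen: View {
        var body: some View { Text("Detail").navigationTitle("Detail") }
    }

    struct SettingsScreen: View {
        var body: some View { Text("Settings").navigationTitle("Settings") }
    }

Something on that order is all I was after, and it still couldn't get there.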

Mathiciann
I am almost convinced your comment is parody but I am not entirely sure.

You want proof for critical/supportive criticism? Then almost in the same sentence you make an insane claim without backing it up with any evidence.

cloverich OP
> You want proof for critical/supportive criticism? Then almost in the same sentence you make an insane claim without backing it up with any evidence.

Nearly every critical reply to my comment bases that criticism on the lack of examples and details I included for my claim, which is the very thing I am suggesting we do (i.e. they are, ironically, agreeing with me?). I am sorry; I thought that intentional bit of irony would help make the point rather than derail the request.

stavros
Well, here's an even more insane claim: I'm infinity times more productive, as I just wouldn't even start projects without the LLM to sidestep my ADHD. Then, when the LLM invariably fucks up, I step in and finish things myself!

Here are a few projects that I made these past few months that wouldn't have been possible without LLMs:

* https://github.com/skorokithakis/dracula - A simple blood test viewer.

* https://www.askhuxley.com - A general helper/secretary/agent.

* https://www.writelucid.cc - A business document/spec writing tool I'm working on, it asks you questions one at a time, writes a document, then critiques the idea to help you strengthen it.

* A rotary phone that's a USB headset and closes your meeting when you hang up the phone, complete with the rotary dial actually typing in numbers.

* Made some long-overdue updates on my pastebin, https://www.pastery.net, to improve general functionality.

* https://github.com/skorokithakis/support-email-bot - A customer support bot to answer general questions about my projects to save me time on the easy stuff, works great.

* https://github.com/skorokithakis/justone - A static HTML page for the board game Just One, so you can play with your friends when you're physically together, without needing to bring the game along.

* https://github.com/skorokithakis/dox - A thing to run Dockerized CLI programs as if they weren't Dockerized.

I'm probably forgetting a lot more, but I honestly wouldn't have been bothered to start any of the above if not for LLMs, as I'm too old to code but not too old to make stuff.

EDIT: dang can we please get a bit better Markdown support? At least being able to make lists would be good!

lisbbb
Did you make any money off any of that or was it all just labors of love type of stuff? I'm enjoying woodworking...
stavros
This is all my hobby, for my job I use Claude in a way that doesn't involve code, but is still very useful. It's basically what inspired Lucid, above, when I realized I find coming up with solutions very easy, but find explaining them very hard, because I assume the other person knows too much and I don't elaborate enough.

LLMs are a great rubber duck, plus they can write the document for you at the end.

Mathiciann
Well done, some of these projects look cool.

Although I was just commenting on the irony of the parent comment.

stavros
What was the irony? I thought you were referring to the "3x speed" part as the insane statement.
AppleBananaPie
To me it seems like an arbitrary number, and I'm not even sure how someone could accurately measure it, but maybe I've missed something :)
steveklabnik
> EDIT

hn has no markdown support at all right now. It's just this https://news.ycombinator.com/formatdoc

stavros
Hm, well, since we are on a Sonnet thread, I might give it a go.
FlyingSnake
> dang can we please get a bit better Markdown support?

Great use case for an LLM to make these changes as HN is open source. It’ll also tell us if LLMs can go beyond JS slop.

emp17344
> I'm infinity times more productive, as I just wouldn't even start projects without the LLM to sidestep my ADHD.

1 is not infinitely greater than 0.

sebastiennight
It... literally is?

Or otherwise, can you share what you think the ratio is?

emp17344
No, 1 is 1 more than 0. There’s a certain sense in which you could say that 1 is infinitely greater than 0, but only in an abstract, unquantifiable way. In this case, it doesn’t make sense to say you’re “infinitely more productive” because you’re producing something rather than nothing.
Fraterkes
I think it's a pedantic point, but maybe they just meant that talking about 1 being multitudes greater than 0 implies multiplication. And since 1/0 is undefined that doesn't make much sense.
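
Spelled out, taking productivity as the ratio of output with the tool to output without:

    \frac{1}{0} \ \text{is undefined}, \qquad \lim_{x \to 0^{+}} \frac{1}{x} = +\infty

the one-sided limit being, presumably, the sense in which "infinity times" was meant.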
inopinatus
Someone attributing all of their productivity to a given tool and none to their own ingenuity and experience is allocating 100% credit to that tool.

It is not a ratio, it is a proportion.

Also, not invented here syndrome (NIH) is cool again.

Most of the 'vibe-coded' projects that I have seen are worse versions of software that has already been tested and stood the test of time.

nenenejej
Everyone who wants to talk about Claude Code, raise a Jira ticket with steps to reproduce and please link to that.
mbesto
> include some specifics of the task and prompt. Like actual prompt, actual bugs, actual feature, etc.

> I am still surprised at things it cannot do, for example Claude code could not seem to stitch together three screens in an iOS app using the latest SwiftUI (I am not an iOS dev).

You made a critical comment yet didn't follow your own rules lol.

> it's so helpful for meaningful conversation!

How so?

FWIW - I too have used LLMs for both coding and personal prompting. I think the general conclusion is that when it works, it works well, but when it fails it can fail miserably and be disastrous. I've come to this conclusion because I read people complaining here and through my own experience.

Here's the problem:

- It's not valuable for me to print out my whole prompt sequence (and context for that matter) in a message board. The effort is boundless and the return is minimal.

- LLMs should just work(TM). The fact that they can fail so spectacularly is a glaring issue. These aren't just bugs, they are foundational because LLMs by their nature are probabilistic and not deterministic. Which means providing specific defect criteria has limited value.

cloverich OP
> How so?

Sure. Another article was posted today[1] on the subject. An example claim:

> If we asked the AI to solve a task that was already partially solved, it would just replicate code all over the project. We’d end up with three different card components. Yes, this is where reviews are important, but it’s very tiring to tell the AI for the nth time that we already have a Text component with defined sizes and colors. Adding this information to the guidelines didn’t work BTW.

This is helpful framing. I would say to this: I have also noticed this pattern. I have seen two approaches help. One, I break up UI / backend tasks. At the end of UI tasks, and sometimes before I even look at the code, I say: "Have you reviewed your code against the existing components library <link to doc>?" and sometimes "Have you reviewed the written code compared to existing patterns and can you identify opportunities for abstraction?" (I use plan mode for the latter, and review what it says). The other approach which I have seen others try, but have not myself (but it makes sense), is to automatically do this with a sub agent or hook. At a high level it seems like a good approach given I am manually doing the same thing now.

[1]: https://antropia.studio/blog/to-ai-or-not-to-ai/

danieloj
Could you share the actual examples of where you’re seeing the 3x output increase?
cloverich OP
Sure. This is an internal web app that uses React on the front end and Rails on the back end. Typical examples I see LLM success with are writing and wiring up routes/controllers/models, writing specs for those, abstracting components, and writing front-end vitest/storybook entries. A typical request (filenames and such redacted) is like: "We recently added <link to model>. We refactored our approach for <goal> to <link to different model file>. We need to refactor <A> to be like <B> in these ways. Do that, then update the spec to match the pattern in <file Y>. Run rspec and rubocop when done, and address any issues". I then either wait or go do something else, then review the code and either ask for follow up, or fix minor issues. Sometimes it follows the wrong pattern and I ask it to adjust, or simply git checkout -- and say "try again, you did Y wrong".

Roughly speaking that is how I think through my work, and when I get to the point of actually writing the code having most of the plan (context) in my head, I simply copy that context to the LLM then go to do something else. I only do this if I believe the LLM can do it effectively, so some tasks I do not ask for help at all on (IMHO this is important).

I also have it help with scripts, especially scripts that munge and summarize data. I know SQL very, very well, but find it still a bit faster to prompt the LLM if it has the schema on hand.

Do you find ^ helpful? i.e does that match how you prompt and if not, in what ways does it differ? If it does, in what ways do you get different results and at what step?

alfalfasprout
right? The irony is so thick you could cut it with a butter knife
not_kurt_godel
3 * 0 = 0.

Checkmate, aitheists.

bartread
I had a complete shocker with all of Claude, GitHub Copilot, and ChatGPT when trying to prototype an iOS app in Swift around 12 months ago. They would all really struggle to generate anything usable, and making any progress was incredibly slow due to all the problems I was running into.

This was in stark contrast to my experience with TypeScript/NextJS, Python, and C#. Most of the time output quality for these was at least usefully good. Occasionally you’d get stuck in a tarpit of bullshit/hallucination around anything very new that hadn’t been in the training dataset for the model release you were using.

My take: there simply isn’t the community, thought leadership, and sheer volume of content around Swift that there is around these other languages. This means both lower quantity and lower quality of training data for Swift as compared to these other languages.

And that, unfortunately, plays negatively into the quality of LLM output for app development in Swift.

(Anyone who knows better, feel free to shoot me down.)

simonh
Going from past discussions, there seem to be two issues there. One is that Swift has changed massively since it came out and huge swathes of examples and articles and such online, that LLMs are trained on, are out of date and thus pollute the training set.

Another issue is that Apple developer docs are largely sequestered behind JavaScript that makes them hard for scrapers to parse.

At least, those are the two explanations I’ve seen that seem plausible.

bartread
Yeah, I'm not a Swift expert by any means - this is literally something I spent a few days on - but this in particular:

> One is that Swift has changed massively since it came out and huge swathes of examples and articles and such online, that LLMs are trained on, are out of date and thus pollute the training set.

100% jibes with my experience. The number of times it would generate code using a deprecated API, or some older mechanism, or mix an older idiom with a newer one... well, it was constant really.

And a lot of Googling when I was fixing everything up manually drew me toward this same conclusion: that high quality, up to date information on Swift was in relatively short supply compared to other languages. Couple that with a lower volume of content across all Swift versions and you end up with far from great training data leading to far from great outputs.
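
To give a flavour (my own illustrative reconstruction, not verbatim LLM output): NavigationView was deprecated in iOS 16 in favour of NavigationStack, and generated code would routinely mix the two generations, e.g.:

    import SwiftUI

    // Deprecated idiom (NavigationView was deprecated in iOS 16):
    struct OldStyleView: View {
        var body: some View {
            NavigationView {
                NavigationLink("Detail") { Text("Detail") }
                    .navigationBarTitle("Home") // older modifier
            }
        }
    }

    // Current idiom:
    struct NewStyleView: View {
        var body: some View {
            NavigationStack {
                NavigationLink("Detail") { Text("Detail") }
                    .navigationTitle("Home")
            }
        }
    }

Both compile (the old one with warnings), which is probably why nothing in the loop pushes the model toward the current API.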

> Apple developer docs are largely sequestered behind JavaScript that makes them hard for scrapers to parse.

Yeah, and honestly - even if there's a solution here - the documentation isn't that great either. Certainly not compared with .NET, Ruby, Python, TypeScript, etc.

If I were a vibe coder I'd certainly avoid Swift like the plague.

(Btw, this isn't a knock on Swift itself: as a language I didn't mind it, although I did notice when debugging that the Objective C underpinnings of many APIs are often on display.)

fnordsensei
As someone who gets useful Clojure out of Claude quite consistently, I’m not sure that volume is the only reason for output quality.
resters
I think what you are saying is true for CLI-only development using Swift. It is possible, but LLMs often get the commands wrong or don't realize how to accomplish something. There have been a number of times when claude/codex has told me I have to edit a plist manually in Xcode before progress can continue.
This is more or less my experience with Go right now.

For a bunch of reasons I want to avoid the standard React, Typescript, and Node stack but the sheer velocity that might enable from the LLM side might make it worth it.

nerdix
Wait...

Are you saying that your experience with Go has been bad? I would think Go would be as good as any other language (if not better). The language itself is simple, the Go team is very methodical about adding new features so it changes fairly slowly, it has excellent built in CLI based tooling that doesn't require third party packages or applications, and there are plenty of large open source Go codebases to train on. Seems like the perfect language for agentic tools.

Yep, my experience has been pretty bad. As in, Claude with Opus can rarely produce even compiling code in my particular project (a year-old, mid-complexity one). This is with adhering to best practices, including a robust Claude.md and detailed PRDs.
emil-lp
How do you measure 3x sustained output increase?

Is it number of lines? Tickets closed? PRs opened or merged? Number of happy customers?

hshshshshsh
All these are useless metrics. They don't say anything meaningful about the quality of your life. I would be more interested in knowing if he can now retire in the next 5 years instead of waiting another 15.

Or does he now get to work for just 2 hours and enjoy the remaining 6 hours doing meaningful things apart from staring at a screen?

simonh
Not everyone hates their job and gets no satisfaction from it. Some of us relish doing something useful and getting paid for it.
hshshshshsh
Sure. I don't doubt it. But let's say I could make 100 million pounds appear in your bank account tomorrow. Would you say no to it and go back to your day job?
simonh
Both can be true. Being able to do better, more productive work can increase my quality of life. And yes, winning lottery millions would increase my quality of life even more.

However I don’t have lottery millions, but I do have a job and I would like to be able to do it better.

fragmede
Can you though? What you can do, though, is quit that job you hate and go do something (anything!) else until you find what's right for you.
hshshshshsh
Obviously I don't. But I was merely pointing at the fact that people don't really love their jobs but have somehow invented a story that makes them believe they do.
cloverich OP
Merged PRs. We typically plan out our work, break it up into e.g. JIRA tasks, then when we create PRs, _very generally_ they should be tied to actual JIRA tickets, i.e. pre-planned work. A ticket is usually a requested feature or bug (as reported by an actual user). So my PR rate, or perhaps less controversially my JIRA close rate, is around 3x higher for the last few months. That's also reflected more generally in my feedback productivity-wise (i.e. from people that are looking at the project as a whole rather than e.g. how many commits I've made).

I exclude from the 3x side projects and CLI tools, which are weird to quantify - they are typically things that would otherwise have stayed ideas in my head that I never did at all. I guess I also generally exclude refactoring, although I do more of that now.

For example, I had Claude fix a bug that was dogging our TypeScript compilation. I couldn't figure out what was so slow about it (>60s to compile). Turned out it was a specific recursive type pulled in by a specific version of a library, triggered by usage from one file! It actually took it a while to figure out; it kept proposing solutions and I had to redirect it a bunch, using mostly just intuition as opposed to experience, e.g. "No, re-run the diagnostics and look at the debug output; give me three examples of areas / commands you could look at and how", and then I'd pick one. I just did that task on the side: I'd go back and look at its output once every day or two, prompt it with something else, then go do my usual tasks as though it didn't exist. That type of work, given our pace / deadlines / etc., might never have gotten done, at least not anytime soon. But I do stuff like that all the time now, I just don't often measure it.

Is that helpful?

senordevnyc
Oh good, a new discussion point that we haven't heard 1000x on here.

Have you heard of that study that shows AI actually makes developers less productive, but they think it makes them more productive??

EDIT: sorry all, I was being sarcastic in the above, which isn't ideal. Just annoyed because that "study" was catnip to people who already hated AI, and they (over-) cite it constantly as "evidence" supporting their preexisting bias against AI.

rapind
> Have you heard of that study that shows AI actually makes developers less productive, but they think it makes them more productive??

Have you looked into that study? There's a lot wrong with it, and it's been discussed ad nauseam.

Also, what a great catch 22, where we can't trust our own experiences! In fact, I just did a study and my findings are that everyone would be happier if they each sent me $100. What's crazy is that those who thought it wouldn't make them happier, did in fact end up happier, so ignore those naysayers!

inopinatus
It is undoubtedly 3x as many bugs.
_alternator_
This would be a win. Professionals make about 1 bug for every 100 loc. If you get 3x the code with 3x the bugs, this is the definition of scaling yourself.
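
In other words, the defect density is unchanged: for n lines at roughly 1 bug per 100 LOC,

    \frac{3n \cdot \tfrac{1}{100}}{3n} = \frac{1}{100}

so you ship 3x the output at the same quality per line.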
lottin
I think it's just a meaningless sentence.
senordevnyc
HN is such a negative and cynical place these days that it's just not worth it. I just don't have the patience to hear yet another anti-AI rant, or have someone who is ideologically opposed to AI nitpick its output. Like you, I've found AI to be a huge help for my work, and I'm happy to keep outcompeting the people who are too stubborn to approach it with an open mind.
la_fayette
I think HN might be one of the few communities where people have been running extensive experiments with LLMs since their inception. Most here take a realistic view of their capabilities. There are certainly proven use cases where LLMs provide clear productivity gains—for example, copying an error message and retrieving potential solutions. At the same time, many recognize that marketing fantasies, such as the idea of having a "PhD in your pocket," are far beyond what this technology can deliver.
xenobeb
To me, it really depends on whether the post is a well-reasoned criticism with something unique to add to the conversation, or the standard, completely pointless anti-AI rant that I have already read a thousand times.
sciencejerk
I think a lot of white-collar workers are on HN and they
catigula (dead)
kelsey98765431
All major nation-state intelligence services have an incentive to spread negative sentiment and reduce developer adoption of AI technology as they race to catch up with the United States.
emp17344
[flagged]
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html

scrollaway
GP is right, though. Many programming communities, including (in some threads, but not all) HN, have become ridiculous anti-AI bubbles - what's the point of trying to have a discussion if you're going to get systematically shut down by people whose entire premise is that they don't use it? It's like trying to explain color to the blind.

What "discussion" do you want to have? Another round of "LLMs are terrible at embedded hardware programming ergo they're useless"? Maybe with a dash of "LLMs don't write bug-free software [but I do]" to close it off?

The discussions that are at all advancing the state of the art are happening on forums that accept reality as a matter of fact, without people constantly trying to pretend things because they're worried they'll lose their job if they don't.

emp17344
I think you’re overly sensitive to criticism of LLMs.
scrollaway
No? I really don't give a crap what people criticize. It doesn't change anything in my life - I have plenty going on and nothing you or anyone says here will alter that. It's just sad to see a community I like (and which I've been a part of for longer than you've been programming) shut itself off from reality...
red_rech (dead)
boogieknite
> for example Claude code could not seem to stitch together three screens in an iOS app using the latest SwiftUI

have you tried the new Xcode extension? that tool is surprisingly good in my limited use. one of the few times Xcode has impressed me in my 2 years of use. read some anecdotes that Claude in the Xcode tool is more accurate than standard Claude Code for Swift. i haven't noticed that myself but have only used the Xcode tool twice so far

> based on your actual work, include some specifics of the task and prompt.

Can't show prompts and actual, real work, because, well, it's confidential, and I'd like to get a paycheck instead of a court summons sometime in the next two weeks.

Generally, 'I can't show you the details of my work' isn't a barrier in communicating about tech, because you can generalize and strip out the proprietary bits, but because LLM behavior is incredibly idiosyncratic, by the time you do that, you're no longer accurately communicating the problem that you're having.

Jonovono
I had Claude Code build a fairly complex SwiftUI app (5+ screens), using Firebase AI Logic and other packages. First prompt, it got pretty much the foundation for the entire thing set up, then over the next day I got it working exactly like I wanted. The thing that took the longest was getting through app review. I was impressed how well it knew SwiftUI and Swift Composable Architecture.
cloverich OP
For my iOS project, I am super curious to what extent it is my lack of Swift knowledge and e.g. how well I can prompt. Because 80% of what I usually ask the LLM to do, I know how to do myself quite well. iOS is the first time I've been coding with something I do not know how to do well; I often can barely read the code (of course that is changing rapidly now). e.g. from a recent session:

> What is the idiom for testing the launch screen on the simulator like.. I don't see anything? How do I know if its there.

i.e. in iOS / Swift, I don't even know if I'm using the right terms for the code I am trying to interrogate, or in some cases even what the thing is!

Jonovono
I have done lots of SwiftUI before, so it may have helped me recognize when it goes off the rails. But I definitely don't do anything fancy with my prompting.

But for stuff like TCA (Swift composable architecture), I basically created a TCA.md file and pasted in a bunch of docs and examples and would reference that.

But for the most part, it was one shotting swiftui screens that were nicer than what I had in my mind.

cpursley
Apple store link (I believe you, just am curious)? I'm toying with the idea of "vibing" a real Swift app instead of messing with the React Native toolchain.
FrustratedMonky
New Claude Model Runs 30-Hour Marathon To Create 11,000-Line Slack Clone

https://www.theverge.com/ai-artificial-intelligence/787524/a...

Yeah, maybe it is garbage. But it is still another milestone; if it can do this, then it probably does OK with the smaller things.

This keeps incrementing from "garbage" to "wow this is amazing" at each new level. We're already forgetting that this was unbelievable magic a couple years ago.

bigyabai
> for example Claude code could not seem to stitch together three screens in an iOS app using the latest SwiftUI

That's... not super surprising? SwiftUI changes pretty dang often, and the knowledge cutoff doesn't progress fast enough to cover every use-case.

I use Claude to write GTK interfaces, which is a UI library with a much slower update cadence. LLMs seem to have a pretty easy time working with bog-standard libraries that don't make giant idiomatic changes.

Would you be so kind as to lead by example?

What are the specific tasks + prompts giving you a 3x increased output, and conversely, what tasks don't work at all?

After an admittedly cursory scan of your blog and the repos in your GH account, I don't find anything in this direction.

cloverich OP
Oh, 3x at work. I shared some details on the methodology: it's the PR rate for ticketed features / bugs (so e.g. closed tickets, as opposed to commits, loc, etc). For prompts and tasks, I am happy to share (redacted as needed; check comment threads) if you want more details, presuming this is a genuine request? Here are a few example prompts (I can't paste exactly, obviously, but I can approximate):

    - "Rails / sidekiq: <x file> uses sidekiq batches. <y file> does it. Refactor your to use pattern in <x file> Match spec in <z file> then run rspec and rubocop"
    - "Typescript / react. <x file>. Why is typescript compilation a bottle neck int his file. Use debugger to provide definitive evidence. Cast type to any and run script and time it; write a script to measure timing if needed. Iteratively work from type `any` to a real type and measure timing at each step. Summarize results"
    - "I redefine <FormComponent> in five places. Find them all. Identify the shared patterns. Make new component in <x location>. Refactor each to to use new component. Run yarn lint and fix any ts issues when done"
    - "<file y>: more idiomatic" (it knows my preferences)



Side projects and such I have no idea, and (as you noted) I do those quite infrequently anyways! Actually, come to think of it... outside of the toy iOS work I did last week, I've not actually worked on my side projects since getting into Claude Code / Cursor agents. For work stuff, I guess other metrics I'd be interested in are total messages sent per task. I do sometimes look at $ per task (but for me anyways, that's so wildly in my favor I don't think it's worth it).
Would you say you do things you'd normally do 3 times faster? Or does it help you move past the things you'd get stuck on or avoid in the past, resulting in an overall 3x speedup?
cloverich OP
Things I'd normally do, 3x faster. That 3x is me focusing explicitly on the precise things I did before - the PR rate on a specific work project - because I tie those PRs back to specific tasks the same as I did before I used Claude Code. I haven't looked at lines of code, total commits, etc. Qualitatively I write more tests and abstract more components than I used to, but those get lumped into the PRs, as I normally try to limit pure refactoring work and instead tie it into ticketed feature requests or bugs.

I don't count the things I'm doing now that I would have avoided or never finished in the past. For those, of course to me personally those are worth much more psychologically than 3x, but who knows if it's an actual boost. I.e. I took a partially scripted task the other day and fully automated it, and also had it output to the CLI in a kind of dorky sci-fi way because it makes it fun to run it. It didn't take long - 30 minutes? But I certainly didn't _gain_ time doing that, just a little more satisfaction. TBH I'm surprised 3x is so controversial, I thought it was a really cool and far more practical assessment than some of these 10x claims I'm seeing.

raincole
I agree. I think we can start with cloverich including some specifics of the task and prompt.
this is a great copypasta
rightbyte
I was thinking the same. Way too perfect to not be spammed around forever.
AnotherGoodName
Definitely an overall positive with the negatives actually being kind of hilarious and no big deal which I'll also discuss.

I can only list my open source outputs concretely for obvious reasons, but https://github.com/rubberduckmaths/reddit_terraforming_mars_... was a near one-shot. It's a Reddit bot that posts card text to the Terraforming Mars subreddit when asked, which is helpful for context on discussions of that board game. Appreciated and used a lot by the community there. There's a similar project I used AI for to scrape card text that was also a near one-shot. I'd say for these two hobby projects 50x productivity is a reasonable statement. I wrote Reddit bots ~10 years ago without coding assistance - https://github.com/AReallyGoodName/xwingminibot - so I get to reasonably compare two very similar projects. I think it's totally fair for me to say 50x for this example. The Reddit API even changed completely in that time, so no one can really say "you used past experience to move faster, it's not the ai giving a 50x boost", but I really didn't. My memory is not that good; I just remember an entire weekend previously vs <30 mins total now, using a bot to one-shot some pretty cool projects.

As for the negatives, they are never serious. A couple of good examples:

"Please correct all lint errors in this project" only to have @lintignore added to all files. Lol! Obviously i just more clearly specified the prompt and it's not like it's hard to catch these things and not ship to prod. It was funny to everyone i showed and no big deal.

Another similar case: "please make the logging of this file less verbose, especially around the tight loop on line X". Instead of changing the log level or removing some of the log statements, the AI redirected stdout at the initialization of the command line program (which would completely break it, of course). Again hilarious, but also no big deal. Not even much of a waste of time, since you just change the prompt and run again, and honestly a few silly diversions like this now and then are kind of fun. As in, the comments of "OMG AI sometimes gets it wrong" aren't at all serious. I have version control, I review code. No big deal.

I too eye-roll massively at some of the criticisms at this point. It's like people are stretching to claim that everyone who's using a coding assistant is a newb who's throwing everything into prod and deleting databases etc. That's just not reality.

criley2
"Please include ACTUAL EVIDENCE!"

"I tripled my output (I provide no evidence for this claim)"

Never change, HN.
