
> Previously if I wanted to know about the French Revolution, I would have Googled it, and likely landed on Wikipedia and maybe 2 or 3 specialty sites. Now, I just ask ChatGPT

This is a common sentiment. I have been using ChatGPT for various things, and there are times when I will turn to it for technical help - sysadmin tasks, or figuring out some shell/CLI commands. However, by and large I definitely still prefer googling. I would much rather read the Wikipedia article on the French Revolution than trust whatever ChatGPT has to say about it. I figure ChatGPT is just spitting out that very same Wikipedia article with some rewording and the occasional hallucination, and I would rather read straight from the source. Same with programming questions: between a Stack Overflow thread with the answer and ChatGPT, I feel a lot better reading Stack Overflow - seeing the context, the replies, the alternatives, etc. - and deciding based on that, rather than just trusting GPT.


This is probably going to really bug people, but it seems foolish to use ChatGPT as your search engine. You have to apply a much more critical eye than when, say, looking at Wikipedia or a search result from a reputable website.

I was telling both my son and my father that ChatGPT is not a search engine, and they started testing things and... well, the obvious happened: they easily found mistakes. And they stopped using it so much. They had assumed it was some kind of search engine.

I can ask it questions and get amazing answers; I see the use of it. But it's not a search engine replacement - you have to double-check things with search engines. Of course, a search engine might return a bogus webpage too.

People don't know that it's not a search engine! It's much more, but it's also a bit less ;-)

Search engines also return websites with mistakes, sometimes even in the snippets they quote directly in the results. But the good part is that the blame can be pinned on the individual website if it's wrong. Likewise, you can use the kind of website it is to figure out whether it's a good source or not.

I don't like that the LLM interface filters everything into a super-friendly, grammatically correct reply that may nevertheless make no logical sense, or be plain wrong. With websites I see tons of signals I can use to decide how much weight to give a piece of information, I can compare different websites, etc.

Also, it's super slow to return a response. Maybe I'm old, but for most of the tasks I use search engines for, I wouldn't use LLMs.

When someone writes or says that they use ChatGPT as a search engine, that just tells me they aren't serious about facts or education. Anyone who invests two minutes in cross-checking will easily find disturbing misinformation in its "search results."

Meanwhile, DuckDuckGo provides reasonable sources.

After rather extensive testing, my heuristic is never to ask ChatGPT for facts about anything that can't be readily verified, and even then not to trust it until it is verified. The act of verification makes ChatGPT too time-consuming to be worth using for education.

It’s useful when I’m trying to solve certain programming problems, though.

Yep - I love ChatGPT for "conversation starter" analysis and ideas. I think it works well when I'm trying to figure out what my question actually is (a program isn't working and the error message is generic), to refine something I've written, or to quiz me with questions.

I cannot imagine using it as a primary source for literally anything, ever.

ChatGPT helps me get an initial grasp of a topic, from which I can come up with more questions and research to do on my own. Usually I don't use ChatGPT as a source but rather as a springboard for ideas.

This is what was often said about Wikipedia: don't use it as your main source of information, but using it as a starting point is fine (even if it's just to get a list of the sources it cites).

Which raises the question: is that still how people use Wikipedia?

I see no difference here.

If I just want to learn conceptual knowledge, having a conversation with ChatGPT isn't much different from doing a DFS through hyperlinks on Wikipedia. In fact, using ChatGPT can be more efficient.
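
(As an aside, the hyperlink DFS is easy to make literal. A toy sketch - mine, not the commenter's; the regex scrape, the User-Agent string, and the depth limit are all illustrative choices:)

    import re
    import urllib.request

    WIKI = "https://en.wikipedia.org"

    def article_links(title):
        # Crude scrape: fetch the page and collect in-wiki article links,
        # skipping namespaced pages like Talk: or File: via the ":" exclusion.
        req = urllib.request.Request(f"{WIKI}/wiki/{title}",
                                     headers={"User-Agent": "dfs-sketch"})
        html = urllib.request.urlopen(req).read().decode("utf-8")
        return sorted(set(re.findall(r'href="/wiki/([^"#:]+)"', html)))

    def dfs(title, depth, seen=None):
        # Depth-limited DFS through hyperlinks: read an article, then chase
        # each of its links, the way a reader falls down a rabbit hole.
        seen = set() if seen is None else seen
        if title in seen:
            return seen
        seen.add(title)
        if depth > 0:  # expand only while there is click budget left
            for nxt in article_links(title):
                dfs(nxt, depth - 1, seen)
        return seen

    # Example: every article title within one click of the starting page
    # (a single HTTP request, since linked pages aren't expanded further).
    # print(len(dfs("French_Revolution", depth=1)))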

If I want to critically understand the details, Wikipedia, aside from providing references, doesn’t offer much more help than ChatGPT. More importantly, just as LLMs can exhibit hallucinations, community content can also contain biases or even errors.

In the end, it still comes down to relying on oneself.

Nearly everything I've ever asked ChatGPT about astrophysics has been critically wrong in some fashion or other. The issue is that all its training data is polluted by popsci.

E.g., I just asked ChatGPT about the metric expansion of space, and its answer was the common popsci answer, which is incorrect.

Next, I asked it what the general equation for general relativistic redshift in an arbitrary spacetime is. It answered:

    1 + z = a(t_rec) / a(t_emit)

where it defined a(t) as the scale factor of the universe. This is not what I asked for, and it is the wrong equation for an arbitrary spacetime.
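
(For reference, and not part of the original comment - this is the standard GR result: for a photon with null geodesic wave vector k^mu exchanged between an emitter and an observer with 4-velocities u^mu, the measured frequencies give

    1 + z = \frac{\omega_{\mathrm{emit}}}{\omega_{\mathrm{obs}}}
          = \frac{\left( g_{\mu\nu}\, k^\mu u^\nu \right)\big|_{\mathrm{emit}}}{\left( g_{\mu\nu}\, k^\mu u^\nu \right)\big|_{\mathrm{obs}}}

with \omega = -g_{\mu\nu} k^\mu u^\nu in the (-,+,+,+) convention, the signs cancelling in the ratio. The scale-factor formula above is just the specialization of this to comoving observers in an FLRW spacetime.)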

Next I asked it if the ADM formalism is strictly equivalent to general relativity. It said that it is strictly equivalent. This is incorrect, as ADM implies a restriction on changes of the topology of spacetime. A lot of the rest of its explanation of ADM is incorrect too.

Next I asked it if the ADM formalism is covariant, which it is. ChatGPT said no, because of the foliation of spacetime. I then asked it if covariant BSSN is covariant, and it said yes - while listing reasons why BSSN isn't covariant (which it isn't) that had nothing to do with why it isn't covariant.

Then I asked ChatGPT if the ADM formalism is more stable than the cBSSN formalism. It said that ADM is more stable for highly dynamical systems like binary black hole mergers, which is factually incorrect - it cannot be used for binary black hole mergers. It attributed this to the absence of a clear slicing structure, which is also incorrect, as they both use the same gauge conditions.

I then asked it if the Z4 formalism is generally covariant. It said that it, and ADM (which it earlier said was not covariant), are both generally covariant. On the plus side it's right about Z4, but my goodness.

Then I asked it if the ADM formalism is hyperbolic. It said no, because it's second order in time and space, but that if it were first order in time and space, it would be. This is incorrect: ADM is weakly hyperbolic, and strongly hyperbolic systems can be first order in time and second order in space. It also states:

>This involves introducing auxiliary variables (such as the conjugate momenta of the metric components)

The conjugate momenta are among the fundamental variables of ADM, so they are not auxiliary variables.
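
(For readers following along - these are the standard definitions, which the comment doesn't spell out: hyperbolicity is a property of the principal part of the system, not of its differential order per se. Writing a first-order evolution system as

    \partial_t u + A^i(u)\, \partial_i u = S(u)

the system is weakly hyperbolic if the principal symbol P(n) = A^i n_i has real eigenvalues for every unit covector n_i, and strongly hyperbolic if P(n) additionally has a complete set of eigenvectors depending uniformly on n. Weakly hyperbolic systems permit frequency-dependent growth of perturbations, which is why plain ADM evolutions tend to blow up in practice.)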

--fin--

Literally every question I asked ChatGPT here got a critically wrong answer, from the basic to the complex, and I'm glossing over the walls of text that are significantly wrong in virtually all the details. It's frustrating because it doesn't say it's not sure - it's just confidently incorrect.

Information on Wikipedia is generally much more accurate, and much more verifiable, because it provides links and sources for you to check yourself. Don't use ChatGPT for anything.

Well, those are the details.

For every piece of content like this generated by an LLM, the first thing I'll do is validate it.

By "conceptual knowledge", I mean something like "why the moon keeps one side towards the Earth" and "what is tidal locking."

It's hard to overstate just how wrong ChatGPT was about everything I asked it - from the broad explanations to the details. If you ask it "why does the universe expand" (which is a basic question), it'll be wrong.

I can often find the sources it's mashing together. You can see it repeating 50% of a sentence from here and combining it with 50% of a sentence from there, to make something that's 100% wrong, because it's not a tool that understands context.

For me it feels actively like the opposite of learning. With a human author, there's a chain of reasoning to bring you to a logical conclusion. Even if that conclusion is wrong, there's value in trying to understand what their reasoning is. ChatGPT finds it impossible to author a structured argument, and its logic is a mess of non sequiturs that there's no value in trying to understand. That makes it a very poor tool for information gathering, IMO.

I tried asking one of your questions with DeepSeek in deep-thinking mode; it answered:

> Okay, the user is asking if the ADM formalism is strictly equivalent to General Relativity. Let me start by recalling what the ADM formalism is. ADM stands for Arnowitt, Deser, and Misner, who developed a way to reformulate Einstein's equations into a Hamiltonian framework. This is useful for things like numerical relativity and quantum gravity.

> So, first, I need to confirm whether ADM is just a reformulation of GR or if it changes the theory. From what I remember, ADM splits spacetime into space and time, using a 3+1 decomposition. They introduce variables like the lapse function and shift vector to handle the foliation of spacetime into spatial hypersurfaces.

> Wait, does this decomposition change the physics? Or is it just a different mathematical approach? I think it's the latter. The Einstein equations in ADM form should be equivalent to the original ones, just expressed in terms of different variables. But I should check if there are any subtleties, like constraints or boundary conditions that might affect equivalence.

> Also, the user might be confused because sometimes when you change the formalism, especially in constrained systems, there can be issues. For example, in ADM, you have the Hamiltonian and momentum constraints. These need to be satisfied at all times, and numerical implementations have to ensure that. But theoretically, as long as the constraints are satisfied, the ADM equations should be equivalent.

> Another point is the initial value problem. ADM is used for that, which suggests it's a way to set up initial data consistent with GR. If the equivalence wasn't strict, then initial data in ADM wouldn't correspond properly to GR solutions. But I think they do, as long as the constraints are met.

> However, I should mention that while the formalism is equivalent, practical implementations (like in numerical relativity) might face issues like constraint violation over time, which aren't a problem with the theory itself but with the method of solving the equations.

> Also, maybe the user is coming from a background where they heard about different formulations of GR, like tetrad formalism or others, and is wondering if ADM is another such equivalent approach. So, confirming that ADM is a Hamiltonian formulation that doesn't alter the physical content of GR, just the mathematical structure.

> Need to make sure to explain that equivalence here means the solutions to the equations are the same, and any differences are in the mathematical handling, not the physical predictions. Also, perhaps touch upon the applications where ADM is useful, to highlight why this reformulation is valuable even though it's equivalent.

> I should also check if there's any gauge freedom or choice in the ADM formalism that might lead to different interpretations. For example, the lapse and shift are gauge variables, but that's part of the coordinate freedom already present in GR. So, choosing different lapses and shifts corresponds to different coordinate choices in standard GR, which doesn't affect the physical equivalence.

> In summary, the ADM formalism is a reformulation that breaks down Einstein's equations into a set of constrained evolutionary equations. The physical content remains the same as GR, so they are strictly equivalent in terms of physical predictions, but the mathematical approach is different, suited for certain types of problems like numerical simulations or canonical quantization.

---

It's far, far beyond my ability to validate whether it's right, but I think it helps that the LLM can explain how it thinks.

I agree LLMs are often confidently incorrect - they often give me code that is completely un-compilable. Users need to review their output very carefully, but we need to do that for content from other sources too.

What I want to say is that LLMs are helpful in many scenarios (ones not as hardcore as astrophysics). For example, there's an article about the myth of the Seven Sisters: https://www.hackerneue.com/item?id=42809925

I wanted to know whether there are other "Seven Sisters"-like myths, but I couldn't come up with the right keywords to search. The LLM helped, and I could then go check sources using the keywords from its answer.

So what it misses is that the ADM formalism is a subset of GR, in that it can only simulate a subset of the full equations of GR. It also gives a lot of reasoning that is quite faulty with respect to answering the question of whether it's strictly equivalent.

>Wait, does this decomposition change the physics? Or is it just a different mathematical approach? I think it's the latter

This, for example, is faulty reasoning, because it's not an either/or situation.

>I should check if there are any subtleties, like constraints or boundary conditions that might affect equivalence

This is also faulty reasoning, because it assumes that non-equivalence must mean they produce different physics.

>Another point is the initial value problem. ADM is used for that, which suggests it's a way to set up initial data consistent with GR. If the equivalence wasn't strict, then initial data in ADM wouldn't correspond properly to GR solutions. But I think they do, as long as the constraints are met.

Notice how it's trying to sneak a fast one past you. It's saying that because ADM with valid initial conditions maps to GR, therefore GR maps to ADM - which is a logical fallacy! It makes this mistake several times in its chain of reasoning.
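
(Schematically - my restatement, not the commenter's own notation - the model establishes one inclusion and quietly asserts the converse:

    \mathrm{Sol}(\mathrm{ADM}) \subseteq \mathrm{Sol}(\mathrm{GR})
    \;\not\Rightarrow\;
    \mathrm{Sol}(\mathrm{GR}) \subseteq \mathrm{Sol}(\mathrm{ADM})

Every constraint-satisfying ADM evolution is a GR solution, but a GR spacetime with no global foliation into spatial slices - e.g. one whose spatial topology changes, as noted upthread - never arises from ADM, so the inclusion is proper.)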

The issue is that it's not doing any real reasoning and isn't capable of actually making deductions. It's about as productive as throwing all the pieces of information on the floor and picking them up in a random order.

This isn't really getting into the weeds of astrophysics - it's ADM 101, and DeepSeek is just lying right to your face with confidence, presented under the guise of a chain of reasoning.

I enjoy phind.com if I want an AI to summarize a topic. Because while it will give me a nice LLM summary, it will also cite its sources.

Whatever particular aspect of the summary intrigues me is what I'll then go look at the original source for.

Citation is very important for verifying that the response is grounded. Google Search does this too.

I would explicitly go for private search instead of an LLM.

I do translations of long podcasts from time to time, and it's never accurate and often produces hallucinations (I know both languages well enough; I just use the LLM to speed things up).

If I want to know about the French Revolution, it's still Google and Wikipedia. If I want a list of famous figures involved in the French Revolution, then it's definitely ChatGPT.

I only ask LLMs when I want a quick fact that isn't that important. For actually reading and absorbing the information, I still defer to Wikipedia.

Shout out to duck.com - my default.
