I was telling both my son and my father about how ChatGPT is not a search engine, and they started testing things and... well, the obvious happened: they easily found mistakes, and they stopped using it so much. They had no idea it wasn't a kind of search engine.
I can ask it questions and get amazing answers; I see the use of it. But it's not a search engine replacement: you have to double-check things with search engines. A search engine might of course return a bogus webpage too.
People don't know that it's not a search engine! It's much more, but it's also a bit less ;-)
I don't like that the LLM interface filters everything into one super friendly, grammatically correct reply that might nevertheless make no logical sense, or be plain wrong. With websites I see tons of signals I can use to decide how much weight to give some piece of information, I can compare different websites, etc.
Also, it's super slow to return a response. Maybe I'm old, but for most of the tasks I use search engines for, I wouldn't use LLMs.
Meanwhile, DuckDuckGo provides reasonable sources.
After rather extensive testing, my heuristic is never to ask ChatGPT for facts about anything that can't be readily verified, and even then not to trust it until it is verified. The act of verification makes ChatGPT too time-consuming to be worth using for education.
It’s useful when I’m trying to solve certain programming problems, though.
I cannot imagine using it as a primary source for literally anything, ever.
If I just want to learn conceptual knowledge, having a conversation with ChatGPT isn't much different from doing a DFS through hyperlinks on Wikipedia (see the toy sketch below). In fact, using ChatGPT can be more efficient.
If I want to critically understand the details, Wikipedia, aside from providing references, doesn't offer much more help than ChatGPT. More importantly, just as LLMs can hallucinate, community content can also contain biases or even errors.
In the end, it still comes down to relying on oneself.
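(To make the "DFS through hyperlinks" analogy concrete, here's a toy sketch in Python. The link graph is invented for illustration; a real version would fetch pages and extract their links.)

    # Toy depth-first search over article hyperlinks.
    LINKS = {
        "Tidal locking": ["Moon", "Orbital resonance"],
        "Moon": ["Earth"],
        "Orbital resonance": [],
        "Earth": [],
    }

    def dfs(start, max_depth=2):
        order, seen = [], set()
        def visit(title, depth):
            if title in seen or depth > max_depth:
                return
            seen.add(title)
            order.append(title)
            for link in LINKS.get(title, []):  # follow each link depth-first
                visit(link, depth + 1)
        visit(start, 0)
        return order

    print(dfs("Tidal locking"))
    # ['Tidal locking', 'Moon', 'Earth', 'Orbital resonance']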
E.g., I just asked ChatGPT about the metric expansion of space, and its answer was the common pop-sci answer, which is incorrect.
Next, I asked it what the general equation for general relativistic redshift in an arbitrary spacetime is. It answered

1 + z = a(t_rec) / a(t_emit)

where it lists a(t) as the scale factor of the universe. This is not what I asked it for, and it is the incorrect equation.
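(For reference, the general expression I was after is, as far as I know, the ratio of photon frequencies measured by the emitter and receiver along the null geodesic:)

    % General relativistic redshift in an arbitrary spacetime.
    % k^mu: photon 4-momentum along the null geodesic,
    % u^mu: 4-velocity of the emitter / receiver.
    1 + z = \frac{\left( g_{\mu\nu}\, k^{\mu} u^{\nu} \right)_{\mathrm{emit}}}
                 {\left( g_{\mu\nu}\, k^{\mu} u^{\nu} \right)_{\mathrm{rec}}}
    % The scale-factor formula 1+z = a(t_rec)/a(t_emit) is only the FLRW
    % special case, for comoving emitter and receiver.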
Next I asked it if the ADM formalism is strictly equivalent to general relativity. It said that it is strictly equivalent. This is incorrect, as ADM implies a restriction on how the topology of spacetime can change. A lot of the rest of its explanation of ADM is incorrect as well.
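(For anyone following along, the ADM 3+1 split writes the line element in terms of a lapse N, shift β^i, and spatial metric γ_ij on the slices of a foliation; as I understand it, the topology restriction comes from presupposing that spacetime can be foliated as Σ × R at all:)

    % ADM line element, standard notation:
    ds^2 = -N^2\, dt^2
           + \gamma_{ij} \left( dx^i + \beta^i dt \right) \left( dx^j + \beta^j dt \right)
    % N: lapse, beta^i: shift, gamma_ij: induced metric on the slices.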
Next I asked it if the ADM formalism is covariant, which it is. ChatGPT said no, because of the foliation of spacetime. I then asked it if covariant BSSN is covariant, and it said yes, while listing reasons why BSSN isn't covariant (which it isn't) that had nothing to do with why it isn't covariant.
Then I asked ChatGPT if the ADM formalism is more stable than the cBSSN formalism. It said that ADM is more stable for highly dynamical systems like binary black hole mergers, which is factually incorrect - it cannot be used for binary black hole mergers. It attributed this to the absence of a clear slicing structure, which is also incorrect, as they both use the same gauge conditions.
I then asked it if the Z4 formalism is generally covariant. It said that it, and ADM (which it earlier said was not covariant), are both generally covariant. Now, on the plus side, it's right about Z4 (see below), but my goodness.
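(For reference, and as far as I recall the original papers, Z4 covariantizes the constraints by adding a four-vector Z_mu to the field equations; ordinary GR is recovered as the Z_mu = 0 sector:)

    % Z4 field equations, schematically:
    R_{\mu\nu} + \nabla_{\mu} Z_{\nu} + \nabla_{\nu} Z_{\mu}
      = 8\pi \left( T_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} T \right)
    % Fully covariant; GR solutions correspond to Z_mu = 0.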
Then I asked it if the ADM formalism is hyperbolic. It said no, because it's second order in time and space, but that if it were first order in time and space, it would be. This is incorrect: ADM is weakly hyperbolic, and strongly hyperbolic systems can be first order in time and second order in space. It also states:
>This involves introducing auxiliary variables (such as the conjugate momenta of the metric components)
The conjugate momenta are among the fundamental variables of ADM, so they are not auxiliary variables (see below).
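(Two reference points, in what I believe are the standard conventions. The momentum conjugate to the spatial metric is built from the extrinsic curvature, i.e. it is a fundamental phase-space variable, not an auxiliary one; and hyperbolicity is a statement about the principal symbol, not about differential order:)

    % Momentum conjugate to gamma_ij (sign conventions vary):
    \pi^{ij} = \sqrt{\gamma}\, \left( K^{ij} - \gamma^{ij} K \right)
    % K_ij: extrinsic curvature, K = gamma^{ij} K_ij its trace.

    % For a first-order system  \partial_t u = A^i \partial_i u + S(u):
    % weakly hyperbolic:   A^i n_i has real eigenvalues for every unit n_i;
    % strongly hyperbolic: additionally, a complete set of eigenvectors
    %                      depending boundedly on n_i.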
--fin--
Literally every question I asked ChatGPT here got a critically wrong answer, from the basic to the complex, and I'm glossing over the walls of text that are significantly wrong in virtually all their details. It's frustrating because it doesn't say it's unsure; it's just confidently incorrect.
Information on Wikipedia is generally much more accurate, and much more verifiable, because it provides links and sources for you to check yourself. Don't use ChatGPT for anything.
For every piece of content like this generated by an LLM, the first thing I'll do is validate it.
By "conceptual knowledge", I mean something like "why the moon keeps one side towards the Earth" and "what is tidal locking."
I can often find the sources it's mashing together. You can see it repeating 50% of a sentence from here and combining it with 50% of a sentence from there, to make something that's 100% wrong, because it's not a tool that understands context (toy example below).
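(A toy illustration of that splicing, in Python, built from claims made elsewhere in this thread; each half is fine on its own, but the combination is false.)

    # Splice half of one defensible sentence onto half of another:
    a = "ADM is only weakly hyperbolic, so it is ill-suited to long evolutions."
    b = "BSSN is strongly hyperbolic, so it is used for binary mergers."
    spliced = a.split(",")[0] + "," + b.split(",")[1]
    print(spliced)
    # -> "ADM is only weakly hyperbolic, so it is used for binary mergers."
    # Each source sentence is plausible; the splice is 100% wrong.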
For me it feels actively like the opposite of learning. With a human author, there's a chain of reasoning that brings you to a logical conclusion. Even if that conclusion is wrong, there's value in trying to understand their reasoning. ChatGPT finds it impossible to author a structured argument, and its logic is a mess of non sequiturs that there's no value in trying to understand. That makes it a very poor tool for information gathering, imo.
> Okay, the user is asking if the ADM formalism is strictly equivalent to General Relativity. Let me start by recalling what the ADM formalism is. ADM stands for Arnowitt, Deser, and Misner, who developed a way to reformulate Einstein's equations into a Hamiltonian framework. This is useful for things like numerical relativity and quantum gravity.
> So, first, I need to confirm whether ADM is just a reformulation of GR or if it changes the theory. From what I remember, ADM splits spacetime into space and time, using a 3+1 decomposition. They introduce variables like the lapse function and shift vector to handle the foliation of spacetime into spatial hypersurfaces.
> Wait, does this decomposition change the physics? Or is it just a different mathematical approach? I think it's the latter. The Einstein equations in ADM form should be equivalent to the original ones, just expressed in terms of different variables. But I should check if there are any subtleties, like constraints or boundary conditions that might affect equivalence.
> Also, the user might be confused because sometimes when you change the formalism, especially in constrained systems, there can be issues. For example, in ADM, you have the Hamiltonian and momentum constraints. These need to be satisfied at all times, and numerical implementations have to ensure that. But theoretically, as long as the constraints are satisfied, the ADM equations should be equivalent.
> Another point is the initial value problem. ADM is used for that, which suggests it's a way to set up initial data consistent with GR. If the equivalence wasn't strict, then initial data in ADM wouldn't correspond properly to GR solutions. But I think they do, as long as the constraints are met.
> However, I should mention that while the formalism is equivalent, practical implementations (like in numerical relativity) might face issues like constraint violation over time, which aren't a problem with the theory itself but with the method of solving the equations.
> Also, maybe the user is coming from a background where they heard about different formulations of GR, like tetrad formalism or others, and is wondering if ADM is another such equivalent approach. So, confirming that ADM is a Hamiltonian formulation that doesn't alter the physical content of GR, just the mathematical structure.
> Need to make sure to explain that equivalence here means the solutions to the equations are the same, and any differences are in the mathematical handling, not the physical predictions. Also, perhaps touch upon the applications where ADM is useful, to highlight why this reformulation is valuable even though it's equivalent.
> I should also check if there's any gauge freedom or choice in the ADM formalism that might lead to different interpretations. For example, the lapse and shift are gauge variables, but that's part of the coordinate freedom already present in GR. So, choosing different lapses and shifts corresponds to different coordinate choices in standard GR, which doesn't affect the physical equivalence.
> In summary, the ADM formalism is a reformulation that breaks down Einstein's equations into a set of constrained evolutionary equations. The physical content remains the same as GR, so they are strictly equivalent in terms of physical predictions, but the mathematical approach is different, suited for certain types of problems like numerical simulations or canonical quantization.
---
It's far, far beyond my ability to validate whether it's right, but I think it's helpful that the LLM can explain how it thinks?
I agree that LLMs are often confidently incorrect; they often give me code that is completely uncompilable. Users need to review their content very carefully, but we need to do that for content from other sources too.
What I want to explain is that LLMs are helpful in many scenarios (not as hardcore as astrophysics). For example, there's an article about the myth of the Seven Sisters: https://www.hackerneue.com/item?id=42809925

I wanted to know whether there are other "Seven Sisters"-like myths, but I couldn't come up with the right keyword to search. The LLM helped, and I could then do a source check on its answer using the keywords it gave.
>Wait, does this decomposition change the physics? Or is it just a different mathematical approach? I think it's the latter
This, for example, is faulty reasoning, because it's not an either/or situation.
>I should check if there are any subtleties, like constraints or boundary conditions that might affect equivalence
This is also faulty reasoning, because it assumes that non-equivalence must mean they produce different physics.
>Another point is the initial value problem. ADM is used for that, which suggests it's a way to set up initial data consistent with GR. If the equivalence wasn't strict, then initial data in ADM wouldn't correspond properly to GR solutions. But I think they do, as long as the constraints are met.
Notice how it's trying to sneak a fast one past you. It's saying that because ADM with valid initial conditions maps to GR, therefore GR maps to ADM - which is a logical fallacy (see below)! It makes this mistake several times in its chain of reasoning.
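(In symbols, the move it keeps making:)

    % "Every constraint-satisfying ADM evolution is a GR solution"
    % is being used to conclude
    % "every GR solution arises from some ADM evolution".
    % That is the converse error:
    (P \Rightarrow Q) \;\not\vdash\; (Q \Rightarrow P)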
The issue is that it's not doing any real reasoning, and it isn't capable of actually making deductions. It's about as productive as just... throwing all the pieces of information on the floor and picking them up in a random order.
This isn't really getting into the weeds of astrophysics; it's ADM 101, and DeepSeek is just... lying right to your face with confidence, presented under the guise of a chain of reasoning.
Whatever aspect of the summary intrigues me is what I'll then go look up in the original source.
I do translations of long podcasts from time to time, and it's never accurate and often produces hallucinations (I know both languages well enough; I'm just using the LLM to speed things up).
This is a common sentiment. I have been using ChatGPT for various things, and there are times when I will turn to it for technical help: sysadmin stuff, or figuring out some shell/CLI commands. However, by and large I definitely still prefer googling something. I would much rather read a Wikipedia article on the French Revolution than trust whatever ChatGPT has to say about it. I figure ChatGPT is just spitting out the very same Wikipedia article with some rewording and the occasional hallucination, and I would rather read straight from the source. Same with programming questions: between reading a Stack Overflow thread with the answer and ChatGPT, I feel a lot better reading Stack Overflow, seeing the context, the replies, the alternatives, etc., and deciding based on that, rather than just trusting GPT.