- They might behave like ChatGPT when queried about the seahorse emoji, which is very similar to an existential crisis.
- There actually have been quite a few suggestions that thought works like autocomplete. A lot of it was just considered niche, e.g. because the mathematical formalisms were beyond what most psychologists or even cognitive scientists would deem useful.
Predictive coding theory was formalized around 2010 and traces its roots back to theories by Helmholtz from the 1860s.
Predictive coding theory postulates that our brains are just very strong prediction machines, with multiple layers of predictive machinery, each layer predicting the next.
- Look up predictive coding theory. According to that theory, what our brain does is in fact just autocomplete.
However, what it is doing is layered autocomplete on itself: one part tries to predict what another part will produce and trains itself on that prediction.
What emerges from this stack of layered autocompletes is what we call thought.
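To make the "layered autocomplete" picture concrete, here is a toy sketch (my own illustration, not the formal model from the predictive coding literature; all names and parameters are made up): each layer keeps a running prediction of its input, trains on its prediction error, and passes only that error up to the next layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_hierarchy(signal, n_layers=3, lr=0.1):
    """Each layer maintains a prediction of its input and learns from the
    prediction error; only the error is forwarded to the layer above
    ("autocomplete on the autocomplete")."""
    predictions = np.zeros(n_layers)
    for x in signal:
        inp = x
        for layer in range(n_layers):
            error = inp - predictions[layer]   # what this layer failed to predict
            predictions[layer] += lr * error   # train on the prediction error
            inp = error                        # only the surprise moves upward
    return predictions

# A structured but noisy input stream: a constant plus noise.
signal = 1.0 + 0.1 * rng.standard_normal(500)
print(run_hierarchy(signal))  # the lowest layer converges near the mean of its input
```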
- Tests usually do measure speed. And often they should. But the question here is "the speed of what?". And how do you measure that speed without also measuring the speed of something else as an error?
If you just want to measure speed, we should clock the time from when the student gets up until they get to the exam room, take out their pen, etc., so every student gets the same time for all of this.
We are now measuring the speed at which the student is able to do the test material including all the preparatory steps. Students who live further away or have slower cars will get worse grades, but we are just measuring speed, aren't we?
That is a deliberately stupid example, but it shows that it is important to ask "speed of what?". When doing a physics exam, what do we want to include in our measurement? The time it takes the person to read and write? Or just the raw speed at which physics knowledge can be applied? What is error and what is measurement?
You can see it as measuring based on different criteria. Or you can see it as trying to get rid of sources of error that may be vastly different for different students.
It would be great if we could reduce the sources of error to zero for everyone. But unfortunately humans are very stochastic in nature, so we cannot do this. So there has to be an acceptable source of measurement error (typical distribution) and an unacceptable source of measurement error (atypical distribution), and to actually measure based on the same criteria, you need to measure differently depending on what you believe the error to be.
- Test theory is a very complex topic within psychology, but psychological test theory gives you a lot of insight into this.
One problem is that we first have to clearly define the construct we want to measure with the test. That is often unclear and underdefined. When designing a test, we also need to be clear about which external influences contribute to noise / error and which errors are created by the actual measurement. There is no test that does not have a margin of error.
A simple / simplified example: when we measure IQ, we want to determine cognitive processing speed, so we need a fixed time for the test. But people also read the questions faster or slower. This is just a typical range, so actual IQ tests will not give a single score but the most likely score together with a margin of error, and test theorists will be very unhappy if you don't take this margin of error seriously. Now take someone who is legally blind. That person will be far outside the margin of error of others. The margins of error account for typical inter-personal and intra-personal (bad day, breakup, etc.) variation, but that doesn't work here. So we try to fix this and account for the new source of error differently, e.g. by giving more time.
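As an illustration of that margin of error, here is a minimal sketch of how classical test theory turns a reliability estimate into a confidence band around a score (the concrete numbers are assumptions for illustration, not values from any particular IQ test manual):

```python
import math

def score_interval(observed_score, sd=15.0, reliability=0.95, z=1.96):
    """Standard error of measurement in classical test theory:
    SEM = SD * sqrt(1 - reliability).
    Returns an approximate 95% band around the observed score."""
    sem = sd * math.sqrt(1.0 - reliability)
    return observed_score - z * sem, observed_score + z * sem

print(score_interval(112))  # roughly (105.4, 118.6) under these assumed values
```

Someone far outside the population the reliability was estimated on (e.g. the legally blind test taker above) breaks exactly this calculation, which is why the conditions get adjusted instead.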
So it highly depends on what you want to measure. If you are doing a test in CS, do you want to measure how well the student understood the material and how fast they can apply it? Or do you want to measure how fast the student could do an actual real-life coding task? Depending on your answer, you need a very different measurement strategy and you need to handle sources of error differently.
When looking at grades, people usually account for these margins of error intuitively. We don't just rely on grades when hiring, but also conduct interviews etc. so we can get a clearer picture.
- Well, that is kind of what we do.
We look at the range of lengths that is typical for legs. And all these get to compete under typical conditions.
Now let's say someone has a leg length that is fairly outside of the typical range; let's say someone has a leg length of zero. We let these athletes compete with each other as well, under different conditions, but we don't really compare the results from the typical to the atypical group.
- Just letting the cars set the light would not work for some of the plans with autonomous cars.
Currently with cars you need green-red periods. But some researchers are considering scenarios where, in the future, the cars just reserve the intersection for a few seconds and then pass through. That would mean a lot of flickering between green and red.
There are scenarios and simulations showing that we can get a lot more (fuel and time) efficient if we just let cars pass by each other in these reserved time windows. In many of these scenarios cars go through the intersection at full speed during their reserved windows.
NOTE: I am just reporting on what I know other researchers are currently investigating and proposing. I am not saying I think this is a good idea. At the very least I would consider it a security nightmare, because hackers could very easily have cars crashing into each other at full speed. Also I would be very, very worried passing through an intersection behind an autonomous car and just trusting that this car not only reserved its window, but also reserved some additional time for me.
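For what it's worth, the core of the "reserved time window" idea is simple enough to sketch; this is a purely illustrative sketch of my own (real proposals model vehicle trajectories, conflict zones within the intersection, and fallback behavior, none of which is shown here):

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    car_id: str
    start: float  # seconds
    end: float    # seconds

class IntersectionManager:
    """Grants exclusive time windows for crossing the intersection."""
    def __init__(self):
        self.reservations: list[Reservation] = []

    def request(self, car_id: str, start: float, duration: float) -> bool:
        end = start + duration
        for r in self.reservations:
            if start < r.end and r.start < end:  # windows overlap
                return False                     # car must slow down and retry later
        self.reservations.append(Reservation(car_id, start, end))
        return True

mgr = IntersectionManager()
print(mgr.request("car-1", start=10.0, duration=2.0))  # True
print(mgr.request("car-2", start=11.0, duration=2.0))  # False, overlaps car-1
print(mgr.request("car-2", start=12.5, duration=2.0))  # True
```

A single spoofed or replayed grant message in a scheme like this is exactly the attack surface mentioned above.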
- Yes, it should definitely be in the 4xx range of HTTP status codes. As 4xx -> "you are incorrect" while 5xx -> "we messed up".
- These "maddeningly repetitive questions" are exactly the internal issues that are being talked about. If they ask "why not" just let them ask.
It's not your job as a parent to 1) make sure your children are happy all the time 2) defend your decisions against all attacks.
I found that when children say "why not" repeatedly, they are actually saying "I am unhappy and want to find an argument to make you reconsider your stance". If you signal to them that this is actually something to argue about, e.g. by repeatedly answering these questions, they will just play the game you are offering them.
I found that it's actually a good approach to answer "why not" directly only the first time, and from the second time on to answer with "I understand that you are unhappy about my decision. I have already explained it and will not explain it again. If you need help dealing with your unhappiness I will be there for you."
A lot of the maddening part of these questions is most often the parent not being able to deal with the unhappiness of the child. Once you accept that unhappiness is a natural part of life, 1) this will be easier for you and 2) you will model much better for your child how to deal with unhappiness.
- There is some research that goes in the opposite direction, i.e. that, especially in German, the lack of feminine gender in the word for girl ("das Mädchen", which is grammatically neuter) is actually quite problematic and can lead to girls not thinking they really have a gender until they grow older. At least up to a certain age, at which children learn to separate grammatical gender from social gender or biological sex.
- Yes, this is very common. Autistic people can easily go into meltdown if they lose an object that they assign emotional states to.
In severe cases it can be enough for the object to be slightly moved to trigger a meltdown, and there are reports describing exactly those thinking patterns.
- Not at the same rates, but at higher rates than the general population, and even more strongly:
""" Together, our results indicate that object personification occurs commonly among autistic individuals, and perhaps more often (and later in life) than in the general population. """
This is well known for many autistic people. "I put this thing there, and now it has to stay at that position, because otherwise it will be very sad."
The surprising part is not that autistic people have empathy for inanimate objects (this is so well known it's even covered in some diagnostic tests), but rather the further confirmation and the comparison to the general population. That comparison is surprising because autism is generally associated with empathy dysfunction, so observing empathy at higher rates is unexpected (see below).
However, as many researchers have pointed out, that is exactly what would be expected. Empathy dysfunction is incorrectly interpreted by many as "lack of empathy". But empathy means understanding and representing the emotional state of another living creature. Assigning emotional states to inanimate objects is by definition an empathy dysfunction, because you are mentally representing something that is not there in the real world. The same goes for the over-empathy reported by some autistics: since they are over-representing the emotional states of others, this is also a dysfunction (i.e. a mismatch between the observed subject and the representation).
So the article builds strongly on the false equivalence between empathy dysfunction and lack of empathy.
- Interesting that you use some kind of schizophrenia axis.
There are actually some scientists who hypothesize that schizophrenia and autism are exact opposites of each other. It's called the predictive coding hypothesis of autism.
In essence the predictive coding hypothesis assumes that large parts of our brain function like a modern video codec: always predicting the next states and reducing information by only picking up on the prediction errors that need to be encoded separately.
Under this hypothesis schizophrenia arises if there is very strong predictive coding and very little influence of the prediction errors. You hear voices in noise, because your prediction mechanism tries to encode that noise as something sensible.
In autism, on the other hand, you have very little prediction and high external influence (i.e. the normal information reduction doesn't take place).
There are some studies that try to pick up the prediction vs. error components in simple cognitive tasks, and they support this idea.
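To illustrate what "strong prediction vs. strong prediction error" means in this picture, here is a toy simulation (my own illustration with made-up parameters, not a model taken from those studies): the percept is a weighted mix of the previous prediction and the raw input, and the weight on the prediction is the knob the hypothesis is about.

```python
import numpy as np

def perceive(noisy_input, prior_weight, prediction=0.0):
    """prior_weight near 1: percepts dominated by the brain's own predictions
    (under this hypothesis the schizophrenia-like end, hearing patterns in noise).
    prior_weight near 0: percepts dominated by raw input, with little of the
    usual information reduction (the autism-like end)."""
    percepts = []
    for x in noisy_input:
        percept = prior_weight * prediction + (1 - prior_weight) * x
        prediction = percept          # the percept becomes the next prediction
        percepts.append(percept)
    return np.array(percepts)

rng = np.random.default_rng(1)
noise = rng.standard_normal(1000)                # pure noise, no real signal
print(perceive(noise, prior_weight=0.95).std())  # smooth, self-confirming percept
print(perceive(noise, prior_weight=0.05).std())  # percept tracks the raw noise
```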
- Both are true, depending on who you ask. The distinction that "men care about objects while women care about people" can already be observed in research on early childhood.
There are very vocal feminists who claim this is a direct sign of strong socialization beginning in early childhood and who claim this effect would completely disappear once we stop socializing gender roles in our society this way.
There are also very vocal feminists who claim that these are two different gendered perspectives and that we should have both perspectives.
Yes, this might seem contradictory at first, but it really isn't as much. Feminism advocates both for women who have grown up under the current socialization and for newborn children, and tries to find ways to do a fairer socialization.
- I would rather dismiss her point on the basis that, from my perspective, this may only be true for a small niche of academics who focus specifically on programming language formalisms.
When I studied programming languages at university, the courses really were focused on formal approaches, so it is true there. But that is how this field of study defines itself, and that should be considered its right.
Once you look outside of this narrow field, you can easily find a lot of projects and endeavors that cover exactly what she is requesting in that article.
* The Rust compiler focuses a lot on more understandable error messages (a topic specifically covered in the article) and even on recommendations that make picking up the language easier.
* The C++11 standardization also focused a lot on usability and how to improve hard-to-read error messages.
* Scratch is explicitly designed to look for alternative approaches to programming.
* Programming in natural languages other than English has been around for a long time.
In school we were taught a German version of Logo. I don't buy her argument that her language research was dismissed purely because it wasn't hard enough. We simply have everything we need to understand how we could do a programming language in another natural language: replace a few lexer definitions and then re-define the whole stdlib in the other language (a rough sketch follows below). There is simply nothing novel about this. I really hope her research on language covers a lot more than just this.
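Here is roughly what "replace a few lexer definitions" amounts to (a made-up toy example of my own; real compilers keep keyword tables that could be swapped just as mechanically):

```python
# Hypothetical German surface keywords mapped back onto canonical ones.
GERMAN_KEYWORDS = {
    "wenn": "if",
    "sonst": "else",
    "solange": "while",
    "gib_zurueck": "return",
}

def translate_tokens(source: str) -> str:
    """Map localized keywords onto the canonical ones; identifiers,
    literals and operators pass through untouched."""
    return " ".join(GERMAN_KEYWORDS.get(tok, tok) for tok in source.split())

print(translate_tokens("wenn x > 0 : gib_zurueck x"))
# -> "if x > 0 : return x"
```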
She also does a very bad bait-and-switch when she suddenly replaces the meaning of the word "hard" in the middle of the article. Initially she clearly uses "hard" to mean difficult, then later she suddenly switches to "hard" in the sense of the "hard" sciences, i.e. sciences based on formalisms and empirical research instead of discussions and opinions.
I agree with her that a lot of research is missing from the non-technical hard sciences (I would consider large parts of psychology a hard science, although it lives at the border of the two worlds). There is some research on the psychology of programming, but this is definitely under-researched. Usability studies of programming languages are also not well established.
In a lot of cases, however, I don't think this is actually something we can really do research on. I have a strong background in psychology, and I don't think we could actually study the impact of different paradigms. If you pick participants who already know programming, they will be highly socialized with the dominant paradigms. If you pick novices, you have to control what they learn over years until they become fluent in the studied paradigm. This isn't feasible and raises severe ethical concerns. Or you don't control it and run short-term studies, in which case the results will just not carry any meaning.
Overall, for me the article raises some really valid concerns about programming language research and CS in general, but I think she took a really bad turn in describing these as gender-based issues. What I would see as the reason for these issues lies in completely different areas and is only very remotely related to gender.
- But you are assuming 0.3... is the representation of 1/3. We don't have to make this assumption; it's just the one we are usually taught. Math doesn't really break from making different assumptions, quite the opposite.
Let's make some different assumptions, not following high school math: when I divide 1 by 3, I always get a remainder. So it would be just as valid to introduce a mathematical object representing this remainder after I have performed the infinite number of divisions. Then
1/3 = 0.3... + eps / 3
2/3 = 0.6... + 2eps / 3
3/3 = 0.9... + 3eps / 3
and since 0.9... = 1 - eps, we get 3/3 = 0.9... + eps = 1
It's all still sound (I haven't proven this, but so far I don't see any contradiction in my assumptions). And it comes out such that 0.9... is not equal to 1, just because I added a mathematical object that forces this outcome.
Edit: Yes, I am breaking a lot of other stuff (e.g. standard calculus) by introducing this new eps object. But that is not an indicator that this is "wrong", just different from high school math.
- I still think that the distinction is very important. With standard math (e.g. real numbers) we obviously have 0.9999... = 1 and this is actually very easy to prove using the assumptions that you are taught during high school math.
However, in higher math you are taught that all this is just based on certain assumptions and it is even possible to let go of these assumptions and replace them with different assumptions.
I think it is important to be clear about the assumptions one is making, and it is also important to have a common set of standard assumptions. Like high school math, which has its standard assumptions. But it is just as possible to make different assumptions and still be correct.
This kind of thinking has very important applications. We are all taught that the angle sum in a triangle is 180 degrees. But again this assumes (by default) Euclidean geometry. And while this is sensible, because it makes things easy in day-to-day life, we find that Euclidean geometry almost never applies exactly in real life; it is just a good approximation. The surface of the earth, which requires a lot of geometry, only follows this assumption approximately, and even space doesn't (theory of relativity). If we had never challenged this assumption, we would never have gotten to the point where we could have GPS.
It is easy to assume that someone is wrong because they got a different result. But it is much harder to put yourself into someone's shoes and figure out whether their result is really wrong (i.e. it may contradict their own assumptions or be a non sequitur) or whether they are just using different assumptions, and to figure out what those assumptions are and what they entail.
For this assumption: yes, you can construct systems where 0.9999... != 1, but then you also must use 1/3 != 0.3333... or you will end up contradicting yourself. In fact, when you assume 1 = 0.9999... + eps, then most likely you must also use 1/3 = 0.3333... + eps/3 to avoid contradicting yourself (I haven't proven that the resulting axiom system is free of contradictions; this is left as an exercise to the reader).
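To make that consistency requirement concrete, here is the bookkeeping under the eps object assumed above (only a sketch, and it additionally assumes that digit-wise arithmetic like 3 · 0.333... = 0.999... still holds; as said, nobody has checked the full axiom system):

```latex
\[
\frac{1}{3} \;=\; 0.\overline{3} + \frac{\varepsilon}{3}
\quad\Longrightarrow\quad
1 \;=\; 3\cdot 0.\overline{3} + \varepsilon
  \;=\; 0.\overline{9} + \varepsilon
  \;=\; (1-\varepsilon) + \varepsilon
  \;=\; 1 .
\]
```

So the two modified assumptions reproduce each other; insisting on 1/3 = 0.333... exactly (with no eps/3 term) is precisely what would force 0.999... = 1 back again.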
- So are you saying these comments are marked as unsafe, or are these comments part of safe Rust?
- I am Mark.
Well, not technically, but I know someone who is.
- In several areas DOS is still used for tasks that require no other tasks to run simultaneously. This can be used to achieve some kind of near-realtime capability.
E.g. eye trackers used in psychology studies or tests often still require DOS, because the companies providing these systems don't want to build software with the same timing capabilities on a newer operating system.