- Their video player is also garbage, especially on mobile. I'll have to open and close a video multiple times to get it to play, the quality will take a nosedive midway through and just stay that way for the rest of the video, and videos take forever to load.
- And what of exclamation or question marks?
- Blocking cookies for Twitter seems to work for me... sometimes I need to refresh the page a few times, however.
- I had this same issue in Redshift and ended up populating a table with values 1 to the maximum number of fields found (e.g. using max(regexp_count(...)) + 1 or something, since N commas delimit N + 1 fields), then cross joining that table with the csv column and calling split_part on the corresponding column and index (with the index coming from the numbers table), roughly as in the sketch below. The cross join ensures that you index every value of the csv column.
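A minimal sketch of the idea, assuming a hypothetical table `events` with an `id` column and a comma-separated `csv_col` (the table and column names are mine, not from the original setup):

```sql
-- Build a numbers table covering at least 1..max field count.
-- Any sufficiently large table works as a row source in Redshift.
CREATE TEMP TABLE numbers AS
SELECT ROW_NUMBER() OVER () AS n
FROM events
LIMIT 1000;

-- The cross join pairs every row with every index, then split_part
-- pulls out the n-th comma-separated field of each row.
SELECT e.id,
       n.n AS field_index,
       SPLIT_PART(e.csv_col, ',', n.n) AS field_value
FROM events e
CROSS JOIN numbers n
WHERE n.n <= REGEXP_COUNT(e.csv_col, ',') + 1;  -- N commas => N + 1 fields
```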
- What does "X for white people" look like though? The only reason that "X for black people" exists is that black people are a minority group and aren't sufficiently catered to by X, which is already more or less "X for white people" by default, at least in the US and much of Europe. As such, any product that that markets itself explicitly to white people (again, only referencing the US and Europe here) is much more likely to have less socially acceptable intent behind it
"X for white people" makes more sense in a population where caucasians are the minority, for instance in China.
- Given that the cost of developing a single drug can be in the billions of dollars, what financial incentive is there for companies to spend so much on R&D and clinical development if there are no commercial exclusivity rights at the end of the pipeline? You could nationalise drug development, but that is a lot of risk to burden the taxpayer with.
- At least in the UK, sandwiches are about the most that pharma reps can bribe doctors with... Which is not to say they aren't effective (you'll get butts in chairs at least, though there's no guarantee they will pay any attention to you).
- I think an important distinction to make is in your use of the word "language", and how we think of language as it concerns human minds versus as it concerns GPT-3.
In our heads, language is a combination of words and concepts, and knowledge can be encoded by making connections between concepts, not simply words. If there is no concept or idea backing up the words, it can hardly be called knowledge. Consider the case of the man who did not speak French, yet memorised a French dictionary and subsequently went on to win a French Scrabble competition. Just because he knows the words, would you say he knows the language?
A language model such as GPT-3 operates only on words, not concepts. It can make connections between words on the basis of statistical correlations, but has no capacity for encoding concepts, and therefore cannot "know" anything.
- The "handwriting" data for this model is basically the coordinates of a pen. The length of the string representation of the text is very different from the length of the coordinate representation of the text, therefore the model "learns" a window corresponding to when it is drawing the current letter, and when to start the next letter. For these letters, as the model doesn't learn how long this window should be, nor how to transition from it to the next letter, it gets stuck and outputs nonsense.
- The problem is that the prior appears to be placed over the school rather than the individual: if your school had a low proportion of high achievers in previous years, students find their grades marked down almost regardless of their own performance. This results in a particularly large grade disparity between independent and state schools. So it is not so much the case that good students get good marks and bad students get bad marks, but rather that good schools get good marks and "bad" schools get bad marks.