This is just one instance of the general rule of ambiguity.
At one of my jobs, a product manager came up with the idea of explicitly delineating everybody's selection criteria into a "normalized" form to allow for aggregating statistics.
I tried to point out up front what a fool's errand this was: there is far too much ambiguity in the language.
I was overruled, and the company then spent probably 7 to 15 million dollars chasing this ridiculous El Dorado dream.
Eventually, after 3 years of wandering in the wilderness of normalized ontologies, they gave up and decided all NLP is bad.
This decision came about 2 months before the release of ChatGPT.