
ozgung
Karma: 654
https://www.linkedin.com/in/ozgungenc/ https://ozgungenc.com/

  1. What is a good way of connecting Obsidian vault to AI?
  2. I agree, classic innovator's dilemma. It's a new business venture that has nothing to do with Meta's existing business or products. They can't be under the same roof and must have independent goals.
  3. Great post and I think this extends to machine learning names, although not that severe. Maybe it all started with Adam. When I say “I used Adam for optimization” this means I used a random opaque thing for optimization. If I say “I used an ADAptive Moment estimation based optimizer” it becomes more transparent. Using human names or random nouns has been a trend. Lora, Sora, Dora, Bert, Bart, Robert, Roberta, Dall-e, Dino, Sam… With varying capitalization for each letter. Even the Transformer. What does it transform exactly? But it gets worse. Here is a list of architectures that may replace Transformers [0]: Linformer, Longformer, Reformer, Performer, Griffin, BigBird, Mamba, Jamba... What’s going on?

    [0]https://huggingface.co/blog/ProCreations/transformers-are-ge...

  4. I think doing your research using search engines/AI/books and paraphrasing your findings is always valuable. And you should cite your sources when you do so, e.g. “ChatGPT says that…”

    > 1. If I wanted to run a web search, I would have done so

    Not everyone has access to the latest Pro models. If AI has something to add to the discussion and a user runs that query for me, I think it has some value.

    > 2. People behave as if they believe AI results are authoritative, which they are not

    AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.

    Any strict rule/ban would be very premature and shortsighted at this point.

  5. For the Flash vs. iPhone case, it was indeed mostly politics. People were using Flash and other plugins in websites because there was no other alternative, say, to add a video player or an animation. The iPhone was released in 2007 and the App Store in 2008. The iPhone and iPad did not support the then-popular Flash in their browsers. The web experience was limited and broken. HTML5 was first announced in 2008 but would be under development for many years; it was not yet standardized and browser support was limited. Web apps were not a thing without Flash. The only alternative for users was the App Store, the ultimate walled garden. There were native apps for everything, even for the simplest things. The Flash ecosystem was the biggest competitor and threat to the App Store at that moment. Finally, in 2010 Steve Jobs addressed the Flash issue and openly declared they would never support it. iPhone users stopped complaining, and in 2011 Adobe stopped development of the mobile plugin.

    Adobe was in a unique position to dominate the app era, but they failed spectacularly. They could have implemented payment/monetization options for their ecosystem to build their own walled garden. Plugins were slow, but this was mostly due to the hardware of the time. That changed rapidly in the following years, but without control of the hardware they had already lost the market.

  6. Tom and Jerry's friendship makes more sense now.
  7. That seems like a valid problem that was also mentioned in the podcast. 50 copies of Ilya, Dave, or Einstein will have diminishing returns. I think the proposed solution is ongoing training and making them individuals. MS Dave will be a different individual from Dave.gov. But then why don't we just train humans in the first place?
  8. > The LLM architectures we have now have reached their full potential already.

    How do we know that?

  9. Another definition: a modern GPU is a general-purpose computer that can perform parallelized and efficient computations. It's optimized to run a limited number of operations, but on a large number of data points.

    This happens to be useful both for graphics (the same "program" running on a huge number of pixels/vertices) and for neural networks (the same neural operations on a huge number of inputs/activations), as sketched below.
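
    A loose sketch of that same-program-many-data-points idea in Python/NumPy (a CPU-side analogy, not actual GPU code; the brighten function and sizes are made up for illustration):

      import numpy as np

      # One "program": brighten a pixel value (the same operation for every element).
      def brighten(x, gain=1.2):
          return np.clip(x * gain, 0.0, 1.0)

      pixels = np.random.rand(1920 * 1080)        # a large number of data points

      # Scalar style: one element at a time (roughly how a single CPU core thinks).
      looped = np.array([brighten(p) for p in pixels])

      # Data-parallel style: the same operation over the whole array in one call,
      # which is the shape of work a GPU spreads across thousands of threads.
      vectorized = brighten(pixels)

      assert np.allclose(looped, vectorized)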

  10. As a Turkish person I used to agree with you. Not anymore. My people don't want to be associated with an ugly bird and I respect that. Also we don't want an exception. We're ready to use Bharat, Deutschland or any other name in our language if those nations want that. Same for the city names. It's about respecting those countries and their people.

    Fun fact: India is Hindistan in Turkish which literally means Land of Turkeys. Maybe we should really change. Bharat means spice which is a better name.

  11. > What most people fail to realise is that in between each token being generated, black magic is happening in between the transformer layers.

    Thank you for saying that. I think most people have an incomplete mental model of how LLMs work, and it's very misleading when trying to understand what they really do and can achieve. "Next token prediction" happens only at the output layer; it's not what really happens internally. The secret sauce is in the hidden layers of a very deep neural network. There are no words or tokens inside the network. A transformer is not the simple token estimator that most people imagine, as the toy sketch below tries to show.
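
    A toy sketch of that point in Python/NumPy (made-up sizes and random weights, nothing like a real transformer): token IDs exist only at the embedding lookup and at the final logits; everything in between is continuous vectors.

      import numpy as np

      rng = np.random.default_rng(0)
      vocab, d_model, n_layers = 1000, 64, 4            # toy sizes, not a real model

      embed   = rng.normal(size=(vocab, d_model))       # token id -> vector
      layers  = [rng.normal(size=(d_model, d_model)) for _ in range(n_layers)]
      unembed = rng.normal(size=(d_model, vocab))       # vector -> logits over tokens

      token_ids = np.array([17, 42, 256])               # input: discrete tokens
      h = embed[token_ids]                              # from here on: only float vectors

      for W in layers:                                  # the interesting part happens here,
          h = np.tanh(h @ W)                            # with no words or tokens in sight

      logits = h[-1] @ unembed                          # only at the output layer...
      next_token = int(np.argmax(logits))               # ...does a token appear again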

  12. This is exactly what I mean by anthropocentric thinking. Plants talk plant things and cows talk about cow issues. Maybe there are alien cows on some planet with larger brains that can do advanced physics in their moo language. Or some giant network of alien fungi discussing their existential crises. Maybe ants talk about ant politics by moving their antennae. Maybe they vote and make decisions. Or bees talk about elaborate honey economics by modulating their buzz. Or maybe plants tell bees the best time for collecting pollen by changing their colors and smells.

    Words, after all, are just arbitrary ink shapes on paper. Or vibrations in air. Not fundamentally different from any other signal. Meaning is added only by the human brain.

  13. This is one of the last bastions of anthropocentric thinking. I hope this will change in this century. I believe even plants are capable of communication. Everything that changes over time or space can be a signal. And most organisms can generate or detect signals. Which means they do communicate. The term “language” has traditionally been defined from an anthropocentric perspective. Like many other definitions about the intellect (consciousness, reasoning etc.).

    That’s like a bird saying planes can’t fly because they don’t flap their wings.

    LLMs use human language mainly because they need to communicate with humans. Their inputs and outputs are human language. But in between, they don’t think in human language.

  14. > I just want to understand why the Turks decided to change this letter, and this letter only

    Because Turkish uses a phonetic alphabet suited to Turkish sounds, based on Latin letters. There are 8 vowels, which come in two subsets:

    AIOU and EİÖÜ.

    When you pair them up with zip(), the pairs are phonetically related sounds but totally different letters at the same time (see the small sketch at the end of this comment). Turkish also uses suffixes for everything, and the vowels in those suffixes sometimes change between these two subgroups.

    This design lets me write any word uniquely and almost correctly using the Turkish alphabet.

    Dis dizayn lets mi rayt ani vörd yüniğkli end olmost koreğtkli yuzing dı törkiş alfabet.

    Ö is the dotted version of O. İ is the dotted version of I. Related but different. Their lower-case versions, logically (not by historical convention), are: öoiı. So we didn't just want to change I, and only I. We just added dots. Since there's no conflicting Oö pairing in other languages, our OoÖö vowels didn't get the same attention. Same for our Ğğ and Şş.

    I hope this answers the question.
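
    A small illustrative sketch of that pairing in Python (the vowel-harmony rule for the plural suffix -lar/-ler is real Turkish; the helper function is made up for this example):

      back  = "aıou"    # back vowels  (AIOU)
      front = "eiöü"    # front vowels (EİÖÜ)

      # zip() pairs each back vowel with its phonetically related front vowel.
      pairs = dict(zip(back, front))   # {'a': 'e', 'ı': 'i', 'o': 'ö', 'u': 'ü'}

      def plural(word):
          """Pick the plural suffix by vowel harmony: back-vowel words
          take -lar, front-vowel words take -ler."""
          last_vowel = [c for c in word if c in back + front][-1]
          return word + ("lar" if last_vowel in back else "ler")

      print(plural("kitap"))   # kitaplar (books; last vowel 'a' is back)
      print(plural("göz"))     # gözler   (eyes;  last vowel 'ö' is front)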

  15. We created our own letters and our own rules. In 1928, long before code pages and computers.

    The assumption that letters come in universal pairs is wrong. That assumption is the bug. You can’t assume that capitalization rules must be the same for every language implementing a specific alphabet. Those rules may change for every language. They do.

    And not just capitalization rules. Autocomplete, for instance, should respect the language as well. You can't “correct” a French word into an English word. Localization is not optional when dealing with text.

  16. Nope, we decided to do it the correct and logical way for our alphabet. Some glyphs are either dotted or dotless. So we have Iı, İi, Oo, Öö, Uu, Üü, Cc, Çç, Ss and Şş. You can see that the Ii pair is actually the odd one out in the series.

    Also, we don't have serifs on our I. It's just a straight line. So it's not even related to your Ii pair in English. You can't dictate how we write our straight lines, can you?

    The root cause of the problem is in the implementation and standardization of computer systems. Computers were originally designed with only the English alphabet in mind, and were patched to support other languages over time, poorly. Computers should obey the language's rules, not the other way around; there's a small sketch of language-aware casing below.
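
    A small sketch in Python of why locale-blind casing breaks Turkish (the turkish_lower helper is just an illustration; real software should use a proper locale-aware library such as ICU):

      # Default Unicode casing is locale-blind: it maps I -> i,
      # but the Turkish pairs are I/ı (dotless) and İ/i (dotted).
      print("ILIK".lower())            # 'ilik' - I -> i turns it into a different Turkish word

      TURKISH_LOWER = str.maketrans({"I": "ı", "İ": "i"})

      def turkish_lower(text):
          """Lowercase with the Turkish I/ı and İ/i pairs handled first,
          then fall back to the default mapping for everything else."""
          return text.translate(TURKISH_LOWER).lower()

      print(turkish_lower("ILIK"))     # 'ılık' (lukewarm: dotless I -> dotless ı)
      print(turkish_lower("İYİ"))      # 'iyi'  (good:     dotted İ -> dotted i)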

  17. > Claude Code, Codex CLI etc can effectively do anything that a human could do by typing commands into a computer.

    One criticism of the current generation of AI is that they have no real-world experience. Well, they have an enormous amount of digital-world experience. That, actually, has more economic value.

  18. So they are actually Ad Search engines.
  19. I’ve just finished reading all 8 posters on the site. For some reason I find them easier to understand than any written content, code, or math. They are all intuitive. It was fun and engaging to work out their notation and the meaning they want to convey. The one with AVL trees was the most useful to me.

This user hasn’t submitted anything.