
iamgopal
900 karma
Engineer.

patelgopal@gmail.com

When in doubt, choose fast. When not in doubt, choose fast.


  1. Much sooner. Hardware, power, software, even AI model design, inference hardware, caching, everything is being improved; it's exponential.
  2. Nope; rather, automation and AI should solve governance to the point that taxes can be lowered or abolished altogether.
  3. On an infinite plane, if we keep adding random points (similar to the sun continuously giving Earth low-entropy energy), eventually it will produce an intelligent life form, which is very efficient at converting low-entropy energy to high-entropy energy.
  4. A small part of me understands that they are banking on three things: 1) oil will be cheap because of the EV boom, so EV dominance will be slow and could take a couple of decades; 2) the cost of electric energy will rise significantly because so much charging and energy infrastructure is required; 3) batteries will reach parity with gasoline as a mature, standardised commodity, and that will be the perfect time to enter.
  5. How about -1/(-(1/(-1/x)))? (This is worked through just after this list.) How many roads must a man walk down before we can call him a man?
  6. By now, especially in Linux, an OS should have emerged that is purely scripts that generate the OS. Or does one already exist?
  7. Is there something in between? A yacht with an optional submerging feature? Maybe just 100 feet?
  8. And how much is that as a percentage of the Bitcoin network's capacity?
  9. I daydream about an open-source, peer-reviewed system that can process votes and control and manage government at every level through the general public and an open voting system, ultimately distributing control.
  10. What I think is that the team that pulled off such a large LLM is not stupid.
  11. Gravity is the weakest force of nature. Any strong-force battery ideas?
  12. How does it compare to typst?
  13. How are you going to protect against an AI that optimises against your tests instead of the actual data?
  14. Are you guys using AI to check on AI?
  15. The next logical step is to connect (or build from the ground up) large AI models to high-performance passive slaves (via MCP or internally) that provide precise facts, language syntax validation, maths equation runners, maybe a Prolog kind of system; that would give them much more power if we train them precisely to use each tool (see the sketch after this list).

    ( using AI to better articulate my thoughts ) Your comment points toward a fascinating and important direction for the future of large AI models. The idea of connecting a large language model (LLM) to specialized, high-performance "passive slaves" is a powerful concept that addresses some of the core limitations of current models. Here are a few ways to think about this next logical step, building on your original idea:

    1. The "Tool-Use" Paradigm
    You've essentially described the tool-use paradigm, but with a highly specific and powerful set of tools. Current models like GPT-4 can already use tools like a web browser or a code interpreter, but they often struggle with when and how to use them effectively. Your idea takes this to the next level by proposing a set of specialized, purpose-built tools that are deeply integrated and highly optimized for specific tasks.

    2. Why this approach is powerful
    * Precision and Factuality: By offloading fact-checking and data retrieval to a dedicated, high-performance system (what you call "MCP" or "passive slaves"), the LLM no longer has to "memorize" the entire internet. Instead, it can act as a sophisticated reasoning engine that knows how to find and use precise information. This drastically reduces the risk of hallucinations.
    * Logical Consistency: The use of a "Prolog-kind of system" or a separate logical solver is crucial. LLMs are not naturally good at complex, multi-step logical deduction. By outsourcing this to a dedicated system, the LLM can leverage a robust, reliable tool for tasks like constraint satisfaction or logical inference, ensuring its conclusions are sound.
    * Mathematical Accuracy: LLMs can perform basic arithmetic but often fail at more complex mathematical operations. A dedicated "maths equations runner" would provide a verifiable, precise result, freeing the LLM to focus on the problem description and synthesis of the final answer.
    * Modularity and Scalability: This architecture is highly modular. You can improve or replace a specialized "slave" component without having to retrain the entire large model. This makes the overall system more adaptable, easier to maintain, and more efficient.

    3. Building this system
    This approach would require a new type of training. The goal wouldn't be to teach the LLM the facts themselves, but to train it to:
    * Recognize its own limitations: The model must be able to identify when it needs help and which tool to use.
    * Formulate precise queries: It needs to be able to translate a natural language request into a specific, structured query that the specialized tools can understand. For example, converting "What's the capital of France?" into a database query.
    * Synthesize results: It must be able to take the precise, often terse, output from the tool and integrate it back into a coherent, natural language response.

    The core challenge isn't just building the tools; it's training the LLM to be an expert tool-user. Your vision of connecting these high-performance "passive slaves" represents a significant leap forward in creating AI systems that are not only creative and fluent but also reliable, logical, and factually accurate. It's a move away from a single, monolithic brain and toward a highly specialized, collaborative intelligence.

  16. If only AI models were trained to connect to data (SQL) and use that data source to answer some of the questions, instead of just being trained on the data, it could reduce model size a lot (a rough sketch follows this list).
  17. Is there any research in this area? Crypto, and particularly Bitcoin mining, has massive capacity for computation, albeit at a lower memory scale; if an AI memory model could be encoded into the blockchain, we could get a benefit from Bitcoin mining.
  18. When that crane reaches the end of its life, it will be moved to India for another 10-15 years of service.
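
For item 5 above: a quick worked simplification (my own arithmetic, not part of the original comment) showing that the nested fraction collapses back to the same thing, since 1/(-1/x) = -x:

    \[
      -\frac{1}{-\left(\dfrac{1}{-1/x}\right)}
        \;=\; -\frac{1}{-(-x)}
        \;=\; -\frac{1}{x},
      \qquad x \neq 0.
    \]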
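
For item 15 above: a minimal sketch, in Python, of the tool-use loop described there. This is my own illustration under stated assumptions, not an existing system: the model is stubbed out with fake_model, and the tool names (CALC, LOOKUP), the run_tool router, and the FACTS table are invented for the example; a real setup would call an actual LLM and real MCP-style backends.

    import ast
    import operator

    # Tiny deterministic "fact slave": precise lookups instead of memorised answers.
    FACTS = {"capital of france": "Paris", "boiling point of water (c)": "100"}

    # Tiny deterministic "maths runner": safely evaluates arithmetic via the AST.
    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv, ast.Pow: operator.pow}

    def calc(expr):
        def ev(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp):
                return _OPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
                return -ev(node.operand)
            raise ValueError("unsupported expression")
        return str(ev(ast.parse(expr, mode="eval").body))

    def run_tool(call):
        """Route a structured call like 'CALC: 3*(2+5)' or 'LOOKUP: capital of france'."""
        name, _, arg = call.partition(":")
        if name.strip() == "CALC":
            return calc(arg.strip())
        if name.strip() == "LOOKUP":
            return FACTS.get(arg.strip().lower(), "unknown")
        return "unknown tool"

    def fake_model(question, tool_result=None):
        """Stand-in for the LLM: first turn emits a tool call, second turn synthesises."""
        if tool_result is None:
            return "LOOKUP: capital of france" if "capital" in question.lower() else "CALC: 3*(2+5)"
        return "Answer: " + tool_result

    def answer(question):
        call = fake_model(question)          # 1. model decides it needs a tool
        result = run_tool(call)              # 2. the passive specialist gives a precise result
        return fake_model(question, result)  # 3. model synthesises the final reply

    print(answer("What is the capital of France?"))  # Answer: Paris
    print(answer("What is 3*(2+5)?"))                # Answer: 21

The point of the sketch is the division of labour: the model only decides which tool to call and how to phrase the query; the precise answer comes from a deterministic backend, which is what keeps the facts and the arithmetic reliable.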
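
For item 16 above: a rough sketch of answering from a data source instead of from model weights, using Python's built-in sqlite3. The question-to-SQL step is stubbed with a hand-written lookup table (QUESTION_TO_SQL, the countries table, and the sample rows are all made up for illustration); in a real system the model would generate the query itself.

    import sqlite3

    # In-memory sample database standing in for a real data source.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE countries (name TEXT, capital TEXT, population INTEGER)")
    conn.executemany("INSERT INTO countries VALUES (?, ?, ?)",
                     [("France", "Paris", 68000000), ("Japan", "Tokyo", 125000000)])

    # Stand-in for the model's "formulate a precise query" step.
    QUESTION_TO_SQL = {
        "capital of japan": "SELECT capital FROM countries WHERE name = 'Japan'",
        "population of france": "SELECT population FROM countries WHERE name = 'France'",
    }

    def answer(question):
        sql = QUESTION_TO_SQL[question.lower()]   # a real system would have the LLM write this
        (value,) = conn.execute(sql).fetchone()   # the fact comes from data, not from weights
        return "%s: %s" % (question, value)

    print(answer("capital of japan"))        # capital of japan: Tokyo
    print(answer("population of france"))    # population of france: 68000000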

This user hasn’t submitted anything.