hzhou321
Joined 468 karma

  1. One is a boss's view, looking for an AI to replace his employees someday. I think that is a dead end. It will just keep getting more sophisticated and more impressive, but it won't work.

    The other is the worker's view, looking at AI as a powerful tool that can leverage one's productivity. I think that one looks promising.

    I don't really care for the chat bot to give me accurate sources. I care about an AI that can provide likely places to look for sources, and I'll build the tool chain to look up and verify the sources.

  2. > To illustrate, Rust macros are basically that. They have a substantially different syntax to normal Rust code that make it very difficult to quickly grok what is going on. It's a net negative IMO, not a positive.

    Yeah, more like a syntax extension than a macro. But I am saying that you need both. Sometimes you need a powerful macro facility to extend the language; sometimes you just need templating to achieve expressiveness. With LISP, I get it that you are programming all the time, never templating, right? But I guess you only appreciate templating when you use your macro system as a general-purpose system. The benefit of a general-purpose macro system is that you only learn one tool for all languages, rather than re-learning the individual wheels. And when you judge a language, you are no longer bothered by its syntactic warts, because you can always fix the expressive part in your macro layer.

  3. M4 only does token-level, inline macros. M4 macros are identifiers that can't be distinguished from the underlying language. M4 macros do not have scopes. M4 does not have context-level macros. M4 does not have the full programmability to extend syntax.

    I can define any macro LISP can define, just not with LISP syntax. I do not have a full AST view of the code, due to its generality: it does not marry any specific underlying language. But I can have extensions that are tailored to a specific language and do understand its syntax. For example, the C extension can check existing functions and inject C functions. MyDef can always query and obtain the entire view of the program, and it is up to the effort put into writing extensions to decide to what degree we want the macro layer to parse. Embedding an AST parser for a local code block is not that difficult.

    It's like the innerHTML thing: I always find the text layer (as strings) more intuitive to manipulate than an AST. If needed, writing an ad-hoc parser in Perl is often simple and sufficient, for me at least.

  4. In other words, it (runtime macro expansion) is a side effect, a compromise, a wart, rather than a design goal, right?
  5. > I think a general purpose macro language is only as useful as the average code-gen/templating language. It's just string manipulation, which can be harder to reason about than true metaprogramming ...

    I would like you to reconsider. Predicting a program's output is hard. So in order to comprehend a macro programmed as code, one needs to run the macro in their head to predict its output, then comprehend that output in order to understand the code. I think that is an unreasonable expectation. That's why reasonable usage of meta-programming stays close to templating, where the programmer can reason about the generated code directly from the template. For higher-powered macros, I argue no one will be able to reason about both layers at the same time. So what happens is that the programmer puts on his macro hat to comprehend the macro, then simply uses a good "vocabulary" (the macro name) to encode his comprehension. And when he puts on his application-programming hat, he takes the macro on an ambiguous understanding, as vocabulary, or what some may call a syntax extension. Because we put on the two hats at different times, we don't need homoiconicity to make the two layers look the same.

  6. > That being said, I think homoiconicity is actually a useful feature, but runtime macro expansion is the dangerous part.

    Practically, why would you ever want a runtime macro expansion?

  7. LISP has long sung homoiconicity as its signature feature. Lately I have started to think homoiconicity is really a wart. Macros are a system for programming the code, while the code is a system for programming the application. They are two different cognitive tasks, and making them nearly indistinguishable is not ideal.

    LISP has a full-featured macro system, and thus hands-down beats the many languages that possess only a handicapped macro system or no macro system at all. That it uses the same/similar language to achieve this is merely accidental. In fact, I think LISP is an under-powered programming language due to its crudeness, but its unconstrained macro system allows it to compensate for the programming part to a certain degree. As a result, it is not a popular language and never will be, but it is sufficiently unique and also extremely simple that it will never die.

    What if we had a standalone general-purpose macro system that could be used with any programming language, with two syntax layers so that programmers can put on a different hat to work on either? That's essentially how I designed MyDef. MyDef supports two forms of macros. Inline macros use the `$(name:param)` syntax. Block macros are invoked with `$call blockmacroname, params`. Both are syntactically simple to grasp and distinct from the hosting language, so programmers can put on different hats to comprehend them. The basic macros are just text substitution, but both inline and block macros can be extended with (currently my choice) Perl to achieve unconstrained goals. The extension layer can access the context before or after, and can set up context for the code within or outside, thus achieving what LISP can, but using Perl. We could extend the macros with Python or any other language as well; it is just a matter of how much access to the macro system's internals we expose.

    Inline macros are scoped, and block macros can define context. These are the two features that I find missing in most macro systems and that I can't live without today. Here is an example:

        $(set:A=global scope)
        &call open_context
            print $(A)
        print $(A)
    
        subcode: open_context
            set-up-context
            $(set:A=inside context)
            BLOCK # placeholder for user code
            destroy-context
  8. The ban is not an issue between China and the US. The US can do whatever it wants to China, and only needs to balance potential retaliation. There is no moral debate between countries, just power play.

    The ban is an issue between the US government/politicians and the US people. There is a nominal moral contract between the government and the people, and the ban needs to be justified against the people's moral beliefs, such as an open internet. Without sufficient moral justification, the people's trust in the government is at risk.

  9. Since you are already familiar with other languages, I suggest just picking a tool that you use daily, that is open source and written in C, and starting to hack on it.
  10. Some billionaires grow integrity at some point, since everything else is just money. Some billionaires never do.
  11. > C specification says a program is ill-formed if any UB happens. So yes, the spec does say that compilers are allowed to assume UB doesn't happen.

    I disagree with the logic going from "ill-formed" to "assume it doesn't happen".

    > I think you're conflating "unspecified behavior" and "undefined behavior" - the two have different meanings in the spec.

    I admit I don't differentiate those two terms. I think they are just word-play.

  12. > Indeed. UB in C doesn't mean "and then the program goes off the rails", it means that the entire program execution was meaningless, and no part of the toolchain is obligated to give any guarantees whatsoever if the program is ever executed, from the very first instruction.

    This is the greatest sin modern compiler folks have committed in abusing C. C the language never says the compiler can change the code arbitrarily due to a UB statement. It is undefined. Most UB code in C, while not fully defined, has an obvious part of its semantics that everyone understands. For example, an integer overflow, while not defined as to what the final value should be, is understood to be an operation that updates a value. It is definitely not, e.g., an assertion on the operands on the grounds that UB can't happen.

    Think about our natural language, which is full of undefined sentences. For example, "I'll lasso the moon for you." A compiler, which is like a listener's brain, may not fully understand the sentence, and it is perfectly fine to ignore it. But if we interpret an undefined sentence as a license to misinterpret the entire conversation, then no one would dare to speak.

    As computing goes beyond arithmetic and programs grow in complexity, I personally believe some amount of fuzziness is key. This narrow view from the compiler folks (which somehow gets accepted at large) is really, IMO, a setback in the evolution of computing.

  13. Nice! If the robots can sustain on solar energy, even better.
  14. How silly is this.
  15. What prevents Linux from achieving the same bandwidth?
  16. It's not about the amount of time kids take up in your life. It is about how kids change your life forever. Your priorities in life change, then your life changes. I guess when the kids leave us, we need to relearn life.
  17. The `infix` syntax is missing from the major items. Without infix syntax, all languages are just variations of LISP -- I guess that was what the article was all about.
  18. For those who are interested, checkout MyDef - https://github.com/hzhou/MyDef
  19. Yes, I agree! In fact, the breakthrough we need is to view programming as a human cognitive operation, embrace the text, and treat manipulating text as a main component of coding. For example, I want to code

         foreach item in list_A
             do_something(item)
    
    This text is the native code in the programmer's mind, and we should allow the programmer to just write it. Then, in a second layer, the programmer codes up the transformer that translates it to the actual programming language, adding incidental complexity such as specific syntax and internal language representations, so that the lower-level compiler can verify, consume, and give feedback.

    The transformer part is super hard if we rely on automatic tools, which would just be another version of a compiler. It is super tedious if we rely on manual human work, which is just how programmers do it today. But if we view the transformer part as part of programming, where the programmer employs tools to mold their program, then it makes sense. The programmer can program the tools to avoid the tedious parts while keeping full flexibility to mold the code any way they desire. It is still programming, but in a meta frame where text is the target.

  20. The author's view is very relatable (to me). I am going to bookmark it, since it verbalizes my feelings so fittingly. I am not sure I am autistic, but I must be to some extent.

    I can also see where the essay is one-sided. The author focuses on explaining himself but neglects to analyze why others have issues understanding him. And I think I understand the reason -- the author is simply (much) more interested in solving the engineering problems than the political ones. The author probably understands the importance of politics, but simply isn't interested.

    I can relate to that. I don't even think it is a problem. If you are happy focusing on technical problems and want to avoid the political side, so be it. However, I am concerned that the author mentions burnout. If you are at risk of burning out, then apparently there is an issue, and the author is stubbornly avoiding addressing it.

    The answer may not be that you need to learn and play politics. It could be that you need to learn to balance yourself. It depends on the individual, but refusing to address it is wrong.

  21. When one reads a reply that is outside his expectations, and when those expectations were taken for granted, that reply will be perceived as negative or hostile, right? But the reply may just be outside his expectations.

    This is common when a newbie discusses with an expert and the newbie assumes certain understandings that are in fact incorrect or should be redefined. When the expert tries to explain or correct that basis, by way of explaining why the question was the wrong question, it may often be perceived as a lack of respect -- one deserves to ask a question without the question being attacked, right?

    The "asshole" is uncalled for.

  22. You mean -- where to find experts that can tolerate newbies asking off-base questions and showing off their half-full bottle? If you just find people with "a deep love of learning", you end up just bouncing shallow questions and answers off each other, right? I am not saying that is not fun, but I am not sure that is really "a deep love of learning".

    From an expert's point of view, where is the motivation for tolerating adult "curiosities"? I can easily tolerate a 5-year-old who is genuinely curious and assumes no base of understanding, where I can enjoy building my explanations from the ground up. You get the satisfaction of filling an empty cup, and occasionally the naive 5-year-old may seriously challenge your fundamental understanding and you yourself learn something. Not all 5-year-olds are like that, and it is extremely rare to find such adults.

    I think the people you are looking for, with "a deep love of learning", are everywhere -- for example, people who love reading popular books on subjects they are not familiar with. That is why there are so many popular non-fiction books. But are you sure that you are actually seeking each other out?

    IMO, if you truly have "a deep love of learning", study textbooks and take MIT open courses, and then read scientific papers. Then you will find those experts -- they are always listed in the references, with contact affiliations.

  23. > Ruby is clever, it can be beautiful, but I've never seen a good codebase using it grow well without enforcing strictly opinionated ways of writing it to ensure maintainability.

    I believe every piece of good, maintainable software needs a strictly enforced, opinionated way of writing it to ensure maintainability.

    A language needs to be general to be useful. Any language can create utterly unmaintainable code. If you shift "opinionated" into the language, it is less easy to write unmaintainable code, but it is also more difficult to write well-fitting maintainable code, because that "opinionated" opinion may not best fit (and usually can't best fit) a specific application, a specific team, and a specific set of experience. On the other hand, if we shift "opinionated" to the programmer, the language inevitably needs to add more meta-programming ability, or simply more ways to do the same/similar thing. The code can then easily become unmaintainable under less experienced programmers who are not opinionated and aren't disciplined.

    The current programming culture expects programmers to be commodities, and thus favors opinionated languages and idiomatic code. I can easily imagine a shift in culture that favors experienced, opinionated, and disciplined programmers, where a different set of languages would be favored.

    Today's software, once it grows to a certain size, is all quite unmaintainable.

  24. I think it is a crowd-dumbing effect. Since hundreds of engineers share the mono-repo, no one can, or cares to, make the decision, or is able to push the decision for an alternative. Even when everyone is complaining, that is still far from everyone agreeing on the alternative. Crowds settle at the lowest common denominator.
  25. Think about programming layers: A->B->C->D->...->Compiler->binary->output, where A is the end programmer, and B, C, D are the libraries and modules. I think what the article describes is not much different from the issues in any complicated software system, as quite a few comments also pointed out. However, as the language becomes more expressive and the compiler becomes more clever, more of the issues will be rooted in the compiler->binary link. I think this is inevitable with the current model of how software works, which I can simplify as: A -> [super compiler] -> output

    The middle part is the concatenation of all the middle links and handles the complexity necessary to translate from language to output. As we try to make A less complex, the middle [super compiler] gets more complex, and more buggy because of that complexity.

    I believe the fundamental issue with this model is the lack of feedback. A gets feedback on the output, and A makes changes (in A) until the output is correct. With a big, complex, opaque middle, for one, we can't get full feedback on the output -- that is the correctness issue: the more complex the middle gets, the less coverage testing can achieve. For two, even with clear feedback -- a bug -- A cannot easily fix it. The logic from A to output is no longer understandable.

    I believe the solution is to abandon the pursuit of the magic solution A -> [super compiler] -> output and instead focus on how to get feedback from every link in A->B->C->D->...->compiler->binary->output.

    For one, this gives A a path to approach and handle complexity. A can choose to check B or C or ... directly against the output, depending on A's understanding and experience. At the least, A can point fingers correctly.

    For two, this provides a path to evolve the design. The initial decision of which layer handles which, or how much, complexity is no longer crucial. Each link, from A to B to C ... to the compiler, can adjust and shift complexity up and down, and eventually the system settles into one that fits the problem and the team.

    I believe this is how natural language works. Initially A tells B to "get an apple", and A gives feedback directly on the end result of which apple B gets, expanding A's instructions into more detail until B gets the right result. Then some of the details get handled by B, and A can give feedback on B's intermediate responses. As the world gets more complex, the complexity at layer A stays finite, but we add middle layers. Usually A only needs feedback on its immediate link (B) and on the final output, but B needs to be able to give feedback on its own next immediate link; and if A is capable, A may choose to cut out one of the middlemen.

  26. Perl started with scripting, so many of its design philosophies are centered around scripting. Python, I think, started with the goal of building applications. My first impression when I learned Python (around 2000) was its OOP nature. Building classes and objects was the last thing on my mind when scripting.

    A few examples:

    Perl (flexible/optional function call interface)

        print "hello world\n";
    
    Python (the insistence of function call parenthesis)

        print("hello world")
    
    Perl (direct string substitution thanks to sigils)

        print "Hello $name\n";
    
    Python (a few formats, keep searching for the perfect ones)

        print("Hello %s" % name)
    
    Perl (a core set of shell like patterns)

        if (!-d $dir) {
            mkdir $dir;
        }
        chdir $dir;
        foreach my $f (glob("*.txt")) {
            ...
        }
    
    Python

        import os
        import glob
        # google for usages
    
    Perl (good lexical scopes)

        if ($do_A) {
            my $x = "A";
            work_with($x);
        } else {
            my $x = "B";
            work_with($x);
        }
    
    Python

        if do_A:
            x = "something" # did I change x for somewhere else?
            ...
    
    Perl (a set of defaults balancing succinctness and readability)

        my %opts;
        while (<IN>) {
            if (/(\w+):\s*(.+)/) {
                $opts{$1} = $2;
            }
        }
    
    Python

        import ...
        ...
            m = re.match(r'...', line, re_flags)
            if m:
                 opts[m.group(1)] = m.group(2)
    
    
    Perl's philosophy is "There is more than one way to do it"; pick the way that makes the most sense to you.

    Python's philosophy is "There should be one-- and preferably only one --obvious way to do it", but obviousness is subjective to some, and for the rest, you need to learn/search/train for the "one" way.

  27. > Lmao Python has been around since 1991 only three years after Perl appeared. How's it a fad?

    If everyone starts wearing pink this year, that's a fad. A fad has nothing to do with when the thing was invented. It only has to do with people choosing it where the dominant reason is that other people choose it.

    > Sucess of python is a testament to the difference a simple and beginner-friendly syntax can make as well as the "batteries-included" paradigm.

    So why didn't Python succeed when Perl dominated? As you said, Python had been around for quite a while by then. "Simple", "beginner-friendly", "batteries-included" are all subjective terms, and none of them, I think, holds up compared to Perl. That mostly comes from one's background and culture, and isn't worth debating.

    > the fact that Perl code is compatible with that 20 years before makes no difference as they have such bad readability

    Writing Perl the way one writes shell scripts is what got Perl its write-only rep. Do not write scripts the way we write shell scripts. Write scripts the way we write code -- that is my motivation for replacing shell scripts with Perl.

    There is no easy way to write readable shell scripts. Perl has all the language features that enable writing readable code. Using bad practitioners as an argument for a bad language is bad logic. But if all you have seen are bad practitioners (or, more likely today, you haven't seen real practitioners but heard the bad rep and saw some bad relics), then it's hard to convince you. Perl has a formatter. Perl has strict mode (which should be the default). Perl has best practices. The people still using Perl today know this. People who claim Perl is unreadable are getting that from the 90s.

  28. Python never succeeded in scripting. Shell still dominates scripting today. Perl is rooted in scripting.
  29. Perl is really cursed by its past glory. People keep trying to revive Perl against the modern fad, for example Python. Fads are powered by the steam of live ecosystems, which Perl once had but no longer has. There is no real reason Perl couldn't succeed where Python does, other than the path of history. And there is no real reason to chase history today.

    Rather than looking for a killer app or adding modern features, such as the effort of Perl 6, now Raku, Perl should shed features and focus on stability (which it is fighting hard to retain), ubiquity (which it had and is losing), and core performance (which is slowly degrading due to features). I think it should try to get a core set into the POSIX standard. The core Perl could be like the POSIX shell, while a distribution can always ship a fancier, but always compatible, Perl.

    Perl should replace shell scripting, period. If there is any reason shell scripts are still preferred, then that should be the TOP focus for a Perl steering council to address.

    It really frustrates me to see people today, me included, still trying to manage basic software engineering in shell scripts. It really amuses me to listen to pastors preach on how to write good shell scripts. It really saddens me to watch efforts to invent a better shell for scripting.
