raiph

  1. > use of UTF-8 character for keywords [concerns me]

    What do you mean? All the keywords in the standard language are ASCII. And even if you meant some other part of the standard language, they're (almost) all ASCII too.

    (The most notable exceptions are the superscript digits 0 thru 9 for powers -- eg `2¹⁶-1 == 65535` -- and the `«` and `»` characters, which are part of Latin-1 (aka "8 bit ASCII") and are aliases for the (7 bit) ASCII `<<` and `>>` tokens. I understand these exceptions may concern you, but I can assure you they're not really a problem in practice.)
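
    A minimal sketch of those two exceptions alongside their pure-ASCII equivalents (standard Raku, no imports):

        say 2¹⁶ - 1;                  # 65535 -- superscript power
        say 2 ** 16 - 1;              # 65535 -- plain ASCII spelling
        say <a b c> «~» <1 2 3>;      # (a1 b2 c3) -- Unicode hyperop
        say <a b c> <<~>> <1 2 3>;    # (a1 b2 c3) -- its ASCII alias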

    > The problem with sigils is that they compose poorly when casting

    Are you thinking Raku sigils are like sigils in other languages, eg Perl or Javascript or PHP?

    From my perspective one of the several _strengths_ of Raku's sigils is that they combine succinct compile time type constraints and composition.

    Just type `@` (reads as "at") to denote a compile time enforced type constraint for the `Positional` interface, an abstract/generic datatype for any iterable integer indexed collection.

    So, if you have an array of points, Raku will happily let you store it in a variable named, say, `points-array`, but naming it `@points` means Raku will compile time enforce binding of that name to an integer indexed collection and visually reflect that that is so for any human glancing at the code.

    As for "casting", if you want to treat `@points` as a Single Item ("a single array of points") then just write `$@points` -- the `$` reads as Single Item -- `S` over `I`.

    (Technically speaking that's not a time consuming cast, just the stripping off of an indirection -- the optimal optimization for that scenario -- but I'm guessing this is semantically the kind of thing you meant.)

    > (refs, counts)

    Again, are you thinking that Raku's sigils are like other languages'? (They're not.)

    > and do not generalize to other types.

    Again, are you thinking that Raku's sigils are like other languages' sigils? They are not.

    Raku's `$` sigil is the generic Single Item interface for any datatype. (It can be parameterized with a data structure's typed structure.)

    The `&` sigil is the generic Single Item interface for any function type. (It can be parameterized with a function's type signature.)

    The `@` sigil covers any integer indexed collection. (It can be parameterized with the collection's fixed length for a fixed size array, or shape for a multidimensional structure. To parameterize a nested heterogeneous data structure's type signature, use `$` instead.)

    The `%` sigil covers any name indexed collection. (It can be parameterized with the collection's key/value types. To parameterize a nested heterogeneous data structure's type signature, use `$` instead.)
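
    To make that concrete, here's a minimal sketch (standard Raku; the variable names are just illustrative):

        my $answer = 42;                  # $ : any Single Item
        my @points = (0,0), (3,4);        # @ : Positional (integer indexed)
        my %ages   = alice => 42;         # % : Associative (name indexed)
        my &double = sub ($n) { $n * 2 }; # & : Callable (any function)

        my Int @scores[10];               # parameterized: 10 Ints, fixed size

        .say for @points;                 # two lines: (0 0) then (3 4)
        .say for $@points;                # one line: [(0 0) (3 4)] -- a
                                          # Single Item holding the array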

    > Plus, they seem to encourage the language designers to implement semantic that is "context aware" which would have been another billion dollars mistake if perl had become more popular.

    Why are you mentioning Perl in a subthread about Raku? Are you aware the language was renamed precisely because so many people were completely misunderstanding the nature of Raku?

    > In other words, that's unnecessary complexity bringing the attention to a poor type system. A bad idea that deserves to die, in my opinion.

    If you're thinking of Perl's type system and applying what you know of that to Raku's, that's like thinking Python's type system is like Haskell's. They are very different.

  2. I hear you were coming from the angle of being useful. In a sense that's what matters most, and I love that you have that spirit.

    If Wikipedia has deadnamed Raku with grace then that might be a model to follow, but in general deadnaming is far from helpful unless it's done carefully. There's a reason why the community embarked on the somewhat painful multi decade process of renaming the language. To try to clarify my understanding I'll summarize it here.

    Because of the original name for Raku, people assumed for a long time (long after it became problematically misleading) that it shared semantics, or syntax, or compiler tech, or libraries, or something technical like that, with some other programming language(s).

    This was partly because Raku did indeed take some inspiration philosophically and/or technically from some existing languages (traces of a half dozen or so are evident), and partly because Raku and its compiler tech feature the ability to use Python code and compilers, and C code and compilers, and Perl code and compilers, and so on, as if they were native Raku code/compilers.

    But Raku was never C, or Python, or Perl, and the old name is unfortunately a form of deadnaming -- which is to say, something that is seldom helpful, especially if some commentary like this comment is not included.

    At least, that's how I experience it.

    That said, regardless of my view, I love your impulse of being helpful, which is why I've written this -- and I hope it does help any readers.

  3. Right. The choice is presumably between:

    Bad: A site being usable for a significant amount of time per day, but also unusable for a significant amount of time per day, and the ratio between usable and unusable time per day significantly deteriorating.

    Worse: A site being usable for a significant amount of time per day, but also unusable for a significant amount of time per day, and the ratio between usable and unusable time per day significantly deteriorating _significantly faster_.

    Clearly, Anubis is at best an interim measure. The interim period might not be significant.

    But it might be. That is presumably the point of Anubis.

    That said, the only time I've heard of Anubis being tried was when Perl's MetaCPAN became ever more unusable over the summer. [0]

    Unfortunately Anubis and Fastly fought, and Fastly won. [1]

    ----

    [0] https://www.perl.com/article/metacpan-traffic-crisis/

    [1] https://www.reddit.com/r/perl/comments/1mbzrjo/metacpans_tra...

  4. > Perl 5 ... hasn't changed for 25+ years.

    A major new version of Perl ships regularly. A few weeks ago the latest major new version shipped. From the 2025 changes document:

    > Perl 5.42.0 represents approximately 13 months of development since Perl 5.40.0 and contains approximately 280,000 lines of changes across 1,600 files from 65 authors.

    Skipping back 5 major new versions (to 2020):

    > Perl 5.32.0 represents approximately 13 months of development since Perl 5.30.0 and contains approximately 220,000 lines of changes across 1,800 files from 89 authors.

    2015:

    > Perl 5.22.0 represents approximately 12 months of development since Perl 5.20.0 and contains approximately 590,000 lines of changes across 2,400 files from 94 authors.

    2012:

    > Perl 5.16.0 represents approximately 12 months of development since Perl 5.14.0 and contains approximately 590,000 lines of changes across 2,500 files from 139 authors.

    There have been well over 10 million lines of code changed in just the core Perl codebase over the last 25 years, reflecting huge changes in Perl.

    ----

    > Perl 6 ... isn't backward compatible

    Raku runs around 80% of CPAN (Perl modules), including ones that use XS (poking into the guts of the Perl 5 binary) without requiring any change whatsoever.

    (The remaining 20% are so Perl 5 specific as to be meaningless in Raku. For example, source filters which convert Perl 5 code into different Perl 5 code.)

    ----

    But you are right about one thing: no one you know cares about backwards compatibility, otherwise you'd know the difference between what you think you know and what is actually true.

  5. > dual licensing means that you don’t actually believe in software freedoms

    If both licenses are accepted by essentially everyone as inherently fully free software licenses, and they don't contradict each other, then what's not to like?

    Consider the appropriate example here, which isn't anything to do with the AGPL (which many people do NOT accept as a free software license), but rather the AL2 / GPL pairing, which was the evolution of the original pairing created by the inventor of dual licensing (Larry Wall).

    What's your beef with Artistic License 2.0 / GPL dual licensing?

  6. > I would [guess?] that [Raku(do)] is using hyper operators and internal concurrency to "cheat".

    You missed a word. I've guessed it was the word "guess". :)

    I haven't checked Rakudo's code but I'm pretty sure any performance optimizations of your code were not related to using hyperoperators or internal concurrency.

    Here are two things I can think of that may be relevant:

    * Rakudo tries to inline (calls to) small routines such as `* + *`. Wikipedia's page on inlining claims that when a compiler for a low level language (like Rust) succeeds in inlining code written in that language, it tends to speed the code up by something like ten percent or a few tens of percent. In a high level language (like Raku) inlining can result in something like a doubling or, in extreme cases, a tenfold speed up. The difference is precisely because low level languages tend to compile to fast code anyway. So while this may explain why Raku(do) is faster than CPython, it can't explain your conclusion that Rust is half as fast as Raku. (I think you almost certainly made a mistake, but let's move on!)

    * In your Raku code you've used `...`. That means all but the highest number on the command line are computed for free, because sequences in Raku default to lazy processing, and lazy processing defaults to caching already generated sequence values. So a single run passed `10 20 30 40` on the command line would call the `* + *` lambda just 40 times instead of 100 (10+20+30+40) times. That's roughly a doubling of speed right there. (See the sketch below.)
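
    A minimal sketch of that caching (standard Raku; `@fib` is just an illustrative name):

        my @fib = 0, 1, * + * ... Inf;  # lazy; generated values are cached
        say @fib[10];                   # 55    -- reifies indices 0..10
        say @fib[20];                   # 6765  -- computes only 11..20 anew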

    So if Rakudo is doing a really good job of codegen for the fibonacci code, and you removed the startup overhead from your Raku timings, then perhaps, maybe, Raku(do) really is "beating" Rust because of the `...` caching effect.

    I still find that very hard to believe, but it would certainly be worth having someone reasonably expert at benchmarking try to repeat and confirm your (remarkable!) result.

    > At this point in its evolution, there is still much work to be done on code optimisation.

    Understatement!

    It took over a decade for production JVMs and JS engines to stop being called slow, another decade to start being called fast (but not as fast as C), and another decade to be considered surprisingly fast.

    Rakudo's first production release came less than a decade ago. So I think that, for now, a reasonable near term performance goal (I'd say "by the end of this decade") is to arrive at the point where people stop calling Raku slow (except in comparison to C).

    > Let me try a translation:

    Let me have a go too. :) But I'll rewrite the code:

        sub MAIN                         #= Print fibonacci values.
            (*@argv);                    #= eg 10 20 prints 55 6765

        print .[ @argv ]                 # print fibonacci values.
          given
            0, 1, 1, 2, 3, 5,            # First fibonacci values.
            sub fib ( $m, $n )           # Fibonacci generator
                    { $m + $n } ... Inf  # to compute the rest.
  7. > CPython 9.72 - 22.10 secs
    > PyPy 1.65 secs
    > Node 1.76 secs
    > Rust 0.25 secs
    > Raku 0.12 secs

    Hi librasteve. :)

    FWIW I find it really hard to believe your benchmarking was reliable / repeatable. Mistakes happen, and I think mistakes must have happened in this case.

    I'll write more about this in another reply to another of your comments.

  8. er, what? (⊙_⊙)
  9. Hi Steve. Please include https://github.com/beercss/beercss in your exploration.
  10. > Sadly another language with concurrency support that fails to learn the lessons from occam.

    Weren't the top two lessons learned from occam about indeterminacy?

    1. If you ignore indeterminacy, you haven't really tackled concurrency, so you become irrelevant. CSP 1 ignored it, so occam 1 and 2 were designed to ignore it. While they became irrelevant for several reasons, I've long thought it was the fatal mistake of ignoring indeterminacy that made the demise of the original occam series inevitable.

    2. If you tackle indeterminacy, but not its deeper consequences, you remain irrelevant. occam-π, which added indeterminacy constructs as an afterthought, has the problem that it tackled them from the outside in.

    During the 2019 concurrency talk panel that brought together three concurrency giants -- the late Joe Armstrong (Erlang), Sir Tony Hoare (CSP / occam), and the late Carl Hewitt (Actor Model) -- Tony said:

    > The test for capturing the essence of concurrency is that you can use the same language for design of hardware and software, because the interface between those will become fluid. We've got to have a hierarchical design philosophy in which you can program each individual ten nanoseconds at the same time as you program over a 10 year time. And sequentiality and concurrency enter in to both those scales. So bridging the scale of granularity and of time and space is what every application has to do. The language can help them do that and that's a real criterion to designing the language.

    Both Joe and Carl instantly agreed about what Tony said but also instantly disagreed about the central role of indeterminacy, and one could be forgiven for thinking Tony still hasn't learned the lesson of the mistake he made with CSP 1 and occam 1.

    Erlang's "let it crash" concept distilled Joe's fundamental understanding of the nature of the physical universe, and how to write correct code given the inescapable uncertainty principle aka inescapable indeterminacy.

    The Actor Model is a simple, purely mathematical model of purely physical computation -- ironically the right theoretical grounding if you apply "Occam's Razor" to concurrent computation in our physical universe. It contrasts sharply with the more "airy fairy" process calculi, which abstract processes as if one can truly ignore that, for such calculi to be useful in reality, the processes they describe must occur in reality -- and then indeterminacy becomes the key characteristic to confront, not an afterthought.

    At least, that's my understanding.

    I recall loving occam when I first read about it in the mid 1980s, partly because I was writing BCPL for a living at the time and occam's syntax was based on BCPL's, but also because I was getting interested in concurrency, and fell in love with the Transputer, which was created by Inmos, a Bristol UK outfit, and I lived in the UK, having spent a couple of years living a few miles from Bristol.

    But one can't deny reality, and the laws of physics, and the growing complexity of software, and the consequences of those two fundamentals, so eventually I concluded the Actor Model was going to outlive CSP and occam. I still think that, but am ever open to being persuaded of the error of my ways of thinking...

  11. I've just found the key exchanges that arrived at "produce" by using the IRC log search[1] and then, er, scanning backwards to where the exchange began[2]:

    > TimToady: is it okay to rename the 'reduce' builtin to 'fold' and add one for 'scan'? My understanding is that 'reduce' is the general term for them both.
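
    (The rename stuck: modern Raku spells the fold `reduce` and the scan `produce`. A quick sketch:)

        say (1..5).reduce(&[+]);    # 15            -- the fold
        say (1..5).produce(&[+]);   # (1 3 6 10 15) -- the scan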

    [1] https://irclogs.raku.org/search.html?query=produce+reduce&ty...

    [2] https://irclogs.raku.org/perl6/2006-05-15.html#17:10-0001

  12. I recall Larry saying the key visual aspect is the `\` (I don't recall him mentioning the `/`) because `[\` visually echoes the typical shape of results:

        .say for [\**] 2,3,4
    
        4                          #           4
        81                         #      3 ** 4
        2417851639229258349412352  # 2 ** 3 ** 4
  13. And to finish that last bit, if an argument of a call is not a `Date` or `Str`:

    * The compiler may reject the code at compile time for failing to type check; and, if it doesn't:

    * The compiler will reject the code at run time for failing to type check unless it's either a `Date`, or a `Str` that successfully coerces to a `Date`.
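
    A minimal sketch of those two rejection paths, assuming a hypothetical sub `f` with a standard `Date(Str)` coercion type constraint:

        sub f(Date(Str) $d) { $d.year }

        f Date.today;     # OK: already a Date
        f '2020-01-01';   # OK: a Str that successfully coerces to a Date
        # f 42;           # rejected at compile time: Int can never work
        # f 'nope';       # compiles, but dies at run time when the Str
                          #   fails to coerce to a Date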

  14. > I have worked on multiple 500K+ line Python projects.

    There were already several 1M+ Perl LoC code bases by the end of the 1990s, as Perl use exploded at places like Amazon.

    One key driver of Raku's design was making it easy to write large, clean, readable code bases, and easy to enforce coding policies if and when a dev team leader chose to.

    (Of course, as with any PL, you can write messy unreadable code -- and you can even write articles and record videos that make it seem like a given PL is only for Gremlins. But it's so easy to write clean readable code, why wouldn't you do that instead?)

    > type hint ... is never enough if it isn't guaranteed by a compiler or at runtime

    Indeed. Type hints are a band aid.

    Raku doesn't do hints. Raku does static first gradual typing.[1]

    The "static first" means static typing is the foundation.

    The "gradual typing" means you don't have to explicitly specify types, but if you do types, they're always enforced.

    Enforcement is at compile time if the compiler's static analysis is good enough, and otherwise at run time at the latest.

    In the Gremlins article only one example uses a static type (`Int`). But user code can and routinely does include types, and they are enforced. For example:

        sub double(Int $x) { $x * 2 }
        double '42'
    
    results in a compile time error:

        SORRY! Error while compiling ...
    
        Calling double(Str) will never work with declared signature (Int $x)
            (Int $x)
    
    > total mess due to weak typing

    I know what you mean, but to be usefully pedantic, the CS meaning of "weak typing" can be fully consistent with excellent typing discipline. Let me demonstrate with Raku:

        sub date-range(Date $from, Date $to) { $from..$to }
    
    The subroutine (function) declaration is strongly typed. So this code...

        say date-range '2000-01-01', '2023-08-21';
    
    ...fails at compile time.

    But it passes and works at run time if the function declaration is changed to:

        sub date-range(Date() $from, Date() $to) { $from..$to }
    
    (I've changed the type constraints from `Date` to `Date()`.)

    The changed declaration works because I've changed the function to be weakly typed, and the `Date` type happens to declare an ISO 8601 date string format coercion.

    But what if you wanted to insist that the argument is of a particular type, not one that just so happens to have a `Date` coercion? You can tighten things up...

        sub date-range(Date(Str) $from, Date(Str) $to) { $from..$to }
    
    ...and now the argument has to already be a `Date` (or sub-class thereof), or, if not, a `Str` (Raku's standard boxed string type), or sub-class thereof, that successfully coerces according to `Date`'s ISO 8601 coercion for `Str`s.
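
    For completeness, a quick sketch of that last declaration in use:

        say date-range '2000-01-01', '2023-08-21';
        # 2000-01-01..2023-08-21 -- both Strs coerced to Dates,
        # yielding a Range of Dates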
  15. I've tried and failed to find out what happened to Curt. I suspect they died or were otherwise incapacitated two years ago, because they suddenly stopped posting online.
  16. > Much later, in the second half of the 90s, I remember the four of us carrying an IBM HDD -- I think it was your normal 5.25" drive but it needed four people because it was mounted on a vibration dampening base ...

    Continuing the shades of Monty Python theme[1]:

    I remember one of my first few nights being in charge of the new IBM kit (I was a "computer operator" back then, in 1980), leaning back in the fancy new chair at the desk with its fancy "virtual" teletypes (a couple "terminals" displaying the status of the OS with a CICS system), and showing off to an "underling" by swinging a long plastic slide rule or something stupid like that (I no longer recall), and me accidentally banging it on the desk. Right "near" a recessed big red button. Or perhaps "on" the button? As I snapped my head around to look at the button and begin to understand what I may have just done I heard an ominous series of whirring and clicking sounds coming from the cpu box, right near where there was an 8" diskette drive that wasn't supposed to be doing anything while the OS was running (it was just for starting the OS). Then I looked at the console... Uhoh. They didn't fire me but it took months before they decided to let me be "in charge" again with someone else actually hovering over me...

    Fast forward to when I was a coder (BCPL) in a small software startup, during the second half of the 80s, presumably 10 years before you were carrying your 5.25" drive monster, I vividly recall someone bringing a 700MB hard drive back from a local computer store. It cost an astonishingly paltry 700 quid or thereabouts. A pound a MB!

    [1] https://www.youtube.com/watch?v=VKHFZBUTA4k

  17. Shades of the Monty Python sketch here, but the following is true...

    4MB?!?

    My first encounter with IBM kit was a, er, darn I'm not sure cuz I'm getting old, but I think it was a 4300? Not big iron in some senses, but still with a box that was something like 6-8 feet long iirc and definitely several feet wide and high. (And a bank of about 6-8 tape decks, each as tall as me, and two disk units, each the size of a washing machine, and so on.)

    Its RAM? A massive 1 MB.

    That IBM kit was the heart of the super new expensive upgrade in 1980 that cost something like 5-10 million pounds iirc to build, including a brand new building to house it and a team of programmers.

    The older setup, which is where I was until its last days, was an ICL system that was expanded at the end of its life to a whopping 48KB -- yes, KB -- of RAM.

    And that kit ran all the systems, internal (payroll, accounting, etc., etc.) and external (sales etc.) for the largest car dealership in the UK.

    128MB? 4MB? Even 1MB? That was an unimaginably insanely large amount of RAM!

    (Yes, it was very weird to be working with this physically enormous setup, and dealing with keeping it all cool enough not to halt for a half hour or so, through super human efforts when the A/C broke down, when the likes of PETs, Sinclair ZX80s, and Acorn Atoms were a thing...)

  18. Are you the champion Raku needs in this respect? Or can you at least help?

    ----

    There's been discussion for a decade of the need for bundling up a single executable that includes everything. Not merely the modules a script needs but also the Rakudo compiler itself.

    (One can even imagine it one day being combined with Cosmopolitan[1] for a radically portable bootstrapping solution. But that's getting way ahead of things -- let's hope we get there later this decade.)

    A decade ago there were a bunch of things that needed to be done to get to the end goal of a practical solution for a single executable that bundles dependencies and just works.

    Since then almost all of the "todo list" for that has been done.

    But as yet no one has gotten to the end goal of making creation of a single executable a practical thing. (And then making it become a standard part of Rakudo.)

    Imo, even if all you did was start another discussion of where we're at regarding that end goal, you'd be helping things along. Maybe not here but, say, on reddit (r/rakulang).

    [1] https://justine.lol/cosmopolitan/

  19. > why can't objects use the already built-in symbols the language support?

    In Raku they can. But one can also introduce new ones. More generally, it's nice that Raku makes it easy for a dev to write a module that lets other devs write code and get nice output like this:

        use Physics::Measure :ALL;

        # Define a distance and a time
        my $d = 42m; say $d;       # 42 m (Length)
        my $t = 10s; say $t;       # 10 s (Time)

        # Calculate speed and acceleration
        my $u = $d / $t; say $u;   # 4.2 m/s (Speed)
        my $a = $u / $t; say $a;   # 0.42 m/s^2 (Acceleration)

        # As a special treat it allows working with measurement errors:
        $d = 10m ± 1;
        $t = 8s ± 2;
        say $d / $t;               # 1.25m/s ±0.4
    
    (With thanks to SteveP for the module and niner for the above example code.)
  20. I get how you (and perhaps others) might think it's as you say. But I can tell you're not saying what you've said based on knowing it to be true; you're guessing it to be so. And while your guess isn't a surprising one, given natural assumptions due to the names "Perl", "Perl 5", "Perl 6", and "Perl 7", it doesn't correspond to what has actually happened.

    Anyone keeping up with the latest release of the Perl language has had near zero breakage for decades. (Indeed Perl has a well deserved reputation for having an outstanding track record in this regard compared to almost all other mainstream PLs.) I personally see every likelihood Perl 7 will extend that track record, though of course my crystal ball prognostications are necessarily based purely on what I see.

    No one using P6 or Raku had a huge breaking change from Perl 5. No one using P6/Raku will have another one going to Perl 7.

    If you presume Raku and Perl are different languages you'll get the essence of what has actually happened so far, and what seems likely to remain more or less true for the rest of this decade at least.

  21. Couple suggestions:

    * Ponylang. Pure Actor model. Static typing to enforce safe concurrent semantics in almost all respects. Plus ORCA GC. Upshot: high performance, and type theory grounded guarantees of no deadlocks, no livelocks, and no data races. Chief downsides are that MS Research has hired its founder, and it's a fledgling language/community.

    * Elixir. Kinda Actor modelish. Dynamic typing, solid community. Someone else has posted about it.

    * Raku. Someone else has posted about it. Chief downsides are slow single core performance and a small community. Upsides include ease of use for multi core code: `start` schedules a lambda or function or statement on a virtual thread (cf Go's `go`; no `async` needed) -- see the sketch below.
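
    A minimal sketch of `start`/`await` (standard Raku, no imports; names are illustrative):

        my @promises = (1..4).map: -> $n { start { $n * $n } };
        say await @promises;    # (1 4 9 16) -- ran on pool threads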

  22. In the Rakudo compiler for Raku, which I just tried, its `chars` count, using the default EGC (extended grapheme cluster) counting, is 2.
  23. It's not a fork any more than C# is a fork of C.

    It's not a rename for a breaking version tree any more than a C# compiler is of GCC.

    It is backwards compatible. The first strong demonstration of this was using the Catalyst framework, a major Perl package, in Raku programs, 6 years ago.

    It's not remotely like Python 3 vs Python 2. Python 3 is a tweak of Python 2 that utterly failed to confront hard problems that desperately needed -- and still need -- to be confronted. For example, Raku sits alongside Swift and Elixir as one of the few mainstream PLs to have properly addressed character processing in the Unicode era so far. In contrast, Python 3's doc doesn't even mention graphemes. Rakudo has no GIL; Python is stuck with CPython's. And so on for a long list of truly huge PL problems.

    Raku is an entirely different programming language. (This is so despite Raku(do) being able to support backwards compatibility via the same mechanism that allows it to use Python modules as if they were Raku modules. And C libraries, and Ruby ones, and so on.)

  24. > Perhaps I misunderstood the idea of pronoun variables.

    Their point is the same as the one in human language. You can't make them up on the spot -- they already exist, with predetermined reference logic that we learn as part of learning a language. All you can do is mention them. This means they often reduce both the reading and comprehension load in comparison with inventing a new name and binding it to some referent.

    For example, in this sentence:

    > I can't speak for Perl's general usefulness, but pronoun variables are just not terribly useful, because you can make them up on the spot.

    Who's "I"? And "you"? What's "them"? These are all obvious. Compare that with:

    > I = the person writing this. This = the writing you are reading. You = the person reading this. I can't speak for Perl's general usefulness, but pronoun variables are just not terribly useful, because you... [You = someone writing code] can make them [Them = pronoun variables (though, as noted, you can't make them)] up on the spot.

    Consider what it would take to write code displaying the first 10 numbers in the fibonacci sequence in some PL.

    Here it is in Raku using pronouns:

        say .[^10] given 0, 1, *+* ... Inf # (0 1 1 2 3 5 8 13 21 34)
    
    I grant that you have to learn this aspect of Raku to be able to read and write the above code. But it only takes a few seconds to learn that:

    * `.foo` means applying `foo` to "it".

    * `[^10]` means up to the 10th element. That's got nothing to do with pronouns, but whatever.

    * The `*` in `*+*` is the pronoun "whatever" and, when used in combination with a unary or binary operator, forms a lambda for that operation. So `*+*` reads as "whatever plus whatever" and represents a two argument lambda that adds its arguments.

    * `...` is Raku's "sequence" operator, which uses the preceding function/lambda as the generator (which in turn uses the preceding N arguments per the generator's arity). Again, nothing to do with pronouns, and yet more you have to learn, but that's how PLs are.
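
    For comparison, here's a sketch of the same pipeline with the pronouns swapped for explicit names (the names are just illustrative):

        my @fib = 0, 1, -> $a, $b { $a + $b } ... Inf;
        say @fib[^10];    # (0 1 1 2 3 5 8 13 21 34)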

    It will no doubt look awfully weird, almost as weird as, say, Chinese (presuming you don't know Chinese). But that (it looking weird) is something completely distinct from whether it (using it) is extremely pleasant once you just accept it.

    With a sufficiently open mind it (accepting using a "whatever" pronoun) can take literally a few seconds. (Thus kids tend to find this notion of "whatever" in Raku easy to grok, whereas adults sometimes struggle.)

  25. I'm still looking forward to a single binary for each of the following:

    * NQP

    * Rakudo

    * Rakudo programs

    Building on https://yakshavingcream.blogspot.com/2019/08/summer-in-revie...

  26. > The fact that they went out of their way to break python 2 unicode when running on python 3 was just totally nuts. Especially after making such a big deal about unicode!

    Imo it's infinitely worse than that.

    The big deal about Unicode is its nature, as defined in the "Summary Narrative" from 1991[0]. To wit:

    > The Unicode character encoding derives its name from three main goals:

    * universal (addressing the needs of world languages)

    * uniform (fixed-width codes for efficient access), and

    * unique (bit sequence has only one interpretation into character codes)

    The Unicode folk realized that it would take decades to shift developers worldwide to doing that properly, so they adopted a three stage plan for software (eg the string types of programming languages) to get from where things were, to where they needed to be:

    * Stage #1: Character = byte

    * Stage #2: Character = code point

    * Stage #3: Character = what a user thinks of as a character[1]

    Python 1 was a Stage #1 language -- Character = byte -- like most others of its time.

    In Python 2 there were tweaks to try to move toward Stage #2 -- Character = code point -- again, like most other PLs of its time.

    In Python 3, they dictated a full switch to Stage #2 -- Character = code point. That was an unnecessarily painful break relative to Python 2. But -- and this is what really matters -- they entirely ignored Stage #3, which is the whole point of Unicode in the final analysis.
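
    For contrast, a minimal sketch of Stage #2 vs Stage #3 counting in a language that implements Stage #3 (Raku, in this case):

        my $text = "e\x[301]";   # 'e' plus COMBINING ACUTE ACCENT, "é"
        say $text.codes;         # 2 -- Stage #2: count of code points
        say $text.chars;         # 1 -- Stage #3: one grapheme, "é"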

    [0] https://www.unicode.org/history/summary.html

    [1] https://unicode.org/glossary/#grapheme

  27. Right, yes. For continued fractions / convergent series, floats are almost certainly going to be a complete disaster. But if only integer division is involved, as is the case with this series, then arbitrary precision rationals are going to be exact, and for most such series sufficiently performant for results up to about 15 decimal digits or so.

    Damian demonstrates this difference between using floats and arbitrary precision rationals when he shows code for computing Bernoulli numbers in his "On The Shoulders Of Giants" presentation.[0]

    (Btw, if you care about programming and appreciate a combination of total technical and geek mastery with standup comic delivery, the whole thing is like watching a great movie, whether you watch it from the "start" at the 5 minute mark, or skip forward to the 45 minute mark for the 15 minute Quantum Computing For Beginners section.)

    [0] https://www.youtube.com/watch?v=Nq2HkAYbG5o#t=11m09s

  28. Are you saying that because (typically) arbitrary precision rationals are too slow, and floating point yields errors, or for some other reason?

    (Raku supports arbitrary precision rationals, and they're normalized to minimize the denominator size, and remain pretty performant until the denominator exceeds 64 bits, so I'm thinking they might do OK for correctly computing transcendentals to maybe 15 decimal digits or so.)
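
    A minimal sketch of that exactness (standard Raku, no imports):

        say 0.1 + 0.2 == 0.3;          # True -- decimal literals are Rats
        say (0.1 + 0.2).raku;          # 0.3  -- a normalized rational
        my $third = FatRat.new(1, 3);  # arbitrary precision rational
        say $third * 3 == 1;           # True -- still exact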
