
Karliss

  1. A partial zip shouldn't be totally useless, and a good unzip tool should be able to repair such partial downloads. In addition to the catalog at the end, zip files also have a local header before each file entry. So unless you are dealing with a maliciously crafted zip file, or a zip file combined with something else, parsing it from the start should produce an identical result. Some zip parsers even default to sequential parsing behavior.

    This redundant information has led to multiple vulnerabilities over the years, since a maliciously crafted zip file with conflicting headers can have 2 different interpretations when processed by 2 different parsers.
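
    To make the redundancy concrete, here is a minimal Python sketch (the file name is just an example, and this is not a hardened parser) that walks a zip sequentially from the start using only the per-entry local headers, ignoring the central directory at the end entirely:

    ```python
    import struct

    LOCAL_HEADER_SIG = b"PK\x03\x04"

    def scan_local_headers(path):
        """Walk a zip from the start using only the per-entry local headers."""
        with open(path, "rb") as f:
            while True:
                if f.read(4) != LOCAL_HEADER_SIG:
                    break  # central directory, junk, or a truncated entry
                (ver, flags, method, mtime, mdate, crc,
                 comp_size, uncomp_size, name_len,
                 extra_len) = struct.unpack("<HHHHHIIIHH", f.read(26))
                name = f.read(name_len).decode("utf-8", "replace")
                f.read(extra_len)
                # Entries written in streaming mode (flag bit 3) keep their sizes
                # in a trailing data descriptor instead; this sketch skips those.
                if flags & 0x08:
                    break
                f.seek(comp_size, 1)  # skip the compressed data itself
                yield name, comp_size

    for name, size in scan_local_headers("partial_download.zip"):
        print(name, size)
    ```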

  2. More like HTML and getting different browsers to render a pixel-perfect identical result (which they don't), including text layout and shaping. Where "different browsers" doesn't mean just Chrome, Firefox and Safari, but also IE6 and CLI based browsers like Lynx.

    PDFs at least usually embed the used subset of fonts and contain explicit placement of each glyph, which is also why editing or parsing text in PDFs is problematic. Although PDF also has many variations of the standard and countless Adobe exclusive extensions.

    Even when you have exactly the same font, text shaping is tricky. And with SVG's lack of ability to embed fonts, files which unintentionally reference a system font or a generic font aren't uncommon. And when you don't have the same font, it's very likely that any carefully placed text on top of a diagram will be more or less misplaced, wrap badly or even completely disappear due to lack of space, because there is zero consistency between the metrics of different fonts.
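
    A quick way to see how often this bites is to list the font families a given SVG actually references; anything not installed on the viewer's system will get substituted. A small stdlib-only Python sketch (the file name is hypothetical, and CSS inside <style> blocks isn't scanned):

    ```python
    import re
    import xml.etree.ElementTree as ET

    def referenced_font_families(svg_path):
        """Collect font-family values from attributes and inline style=""."""
        families = set()
        for elem in ET.parse(svg_path).iter():
            fam = elem.get("font-family")
            if fam:
                families.add(fam.strip())
            m = re.search(r"font-family\s*:\s*([^;]+)", elem.get("style", ""))
            if m:
                families.add(m.group(1).strip())
        return families

    # Any family listed here that isn't installed will be substituted,
    # shifting or wrapping the carefully placed labels.
    print(referenced_font_families("diagram.svg"))
    ```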

    The situation with the specification is also not great. SVG 1.1 alone defines certain official subsets, and in practice much software picks whatever is most convenient for it.

    The SVG 2.0 specification has been in limbo for years, although it seems like the relevant working group has recently resumed discussions. Browser vendors are pushing towards synchronizing certain aspects of it with HTML-adjacent standards, which would make fully supporting it outside browsers even more problematic. It's not just polishing little details: many major parts that were in earlier drafts are getting removed, reworked or put on the backlog.

    There are features which are impractical to implement, or which you don't want to implement outside major web browsers that have a proper sandboxing system (and even that's not enough once uploads get involved), like CSS, JavaScript, and external resource access across different security contexts.

    There are multiple different parties involved with different priorities and different thresholds for which features are sane to include:

    - SVG as a scalable image format for icons and other UI elements in (non browser based) GUI frameworks -> anything more complicated than colored shapes/strokes can be problematic

    - SVG as a document format for desktop vector graphics editors (mostly Inkscape) -> the users expect feature parity with other software like Adobe Illustrator or Affinity Designer

    - SVG in browsers -> they get certain SVG features for free by treating it like a weird variation of HTML, because they already have CSS and JavaScript functionality

    - SVG as a 2d vector format for CAD and CNC use cases (including vinyl cutters, laser cutters, engravers ...) -> rarely supports anything beyond basic shapes and paths

    Besides the obviously problematic features like CSS, JavaScript and animations, stuff like raster filter effects, clipping, text rendering, and certain resource references are also inconsistently supported.

    From Inkscape, unless you explicitly export as plain 1.1-compatible SVG, you will likely get an SVG with some cherry-picked SVG2 features and a bunch of Inkscape specific annotations. It tries to implement any extra features in a standard-compatible way, so that in theory, if you ignore all the Inkscape namespaced properties, you would lose some editing functionality but still get the same rendered result. In practice some SVG renderers can't even do that, and the SVG2 specification not being finalized doesn't help. And if you export as plain 1.1 SVG, some features either lack good backwards compatibility converters or are implemented as JavaScript, making the files incompatible with anything except browsers, including Inkscape itself.

    Just recently GNOME announced work on a new SVG renderer. But everything points to them planning to implement only the things they need for the icons they draw themselves and the official Adwaita theme, and nothing more.

    And that's not even considering the madness of the full XML specification/feature set itself. Certain parts of it are just asking for security problems. At least in recent years some XML parsers have started to ship safer defaults, disabling or not supporting that nonsense. But when you encounter an SVG with such XML, whose fault is it: the SVG renderer for intentionally not enabling insane XML features, or the person who hand crafted the SVG using them?
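
    To give a concrete flavour of the XML problem: the classic case is external entities pulling in local files or network resources. A minimal sketch of what "safer defaults" means in practice, using lxml as one example (the parser options shown are lxml specific; defusedxml is another common choice):

    ```python
    from lxml import etree

    # Hostile SVG abusing a DTD external entity to read a local file.
    evil_svg = b"""<?xml version="1.0"?>
    <!DOCTYPE svg [<!ENTITY leak SYSTEM "file:///etc/hostname">]>
    <svg xmlns="http://www.w3.org/2000/svg">
      <text x="10" y="20">&leak;</text>
    </svg>"""

    # Hardened parser: no entity expansion, no DTD loading, no network access.
    safe_parser = etree.XMLParser(resolve_entities=False,
                                  load_dtd=False,
                                  no_network=True)
    root = etree.fromstring(evil_svg, parser=safe_parser)

    # The &leak; reference stays unexpanded instead of leaking file contents.
    print(etree.tostring(root, pretty_print=True).decode())
    ```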

  3. Industrial cooler manufacturers and DC PR teams have their ways to greenwash the truth.

    "40% of data centers are using evaporative cooling" doesn't mean that other 60% are fully closed loop water to air coolers or what would be called "dry cooling systems" by the manufacturers. The other 60% could be "adiabatic coolers" or "hybrid coolers" or if data center is close to large body of water/water heat exchangers, where 2/3 of those still depend on evaporating water, but the manufacturers would put them in separate category from evaporative coolers.

    I just took a look at the offerings of one of the industrial cooler manufacturers. They had only 1 dry cooler design, compared to a dozen more or less evaporative ones. And even that one was advertised as having a "post install bolt-on adiabatic kit option". Which feels like a cheat that lets you claim during the initial project and build that you are green by using only dry coolers, and then, after the press releases are done, the grant money is collected and things are starting to operate at full capacity, attach sprinklers to keep the energy costs lower.

  4. Often it's less about learning from the bugfix itself and more about the journey: learning how various pieces of software operate and fit together, and learning the tools you tried for investigating and debugging the problem.
  5. One of the reasons you mostly saw point operations and very few rectangle operations is that quadtrees aren't great for range operations.

    Quadtrees might look like a natural generalization of binary trees, but some things that work very efficiently in binary trees don't survive the naive generalization into quadtrees. For a full binary tree with W leaves, any segment can be described using O(log W) nodes. Almost every use of a binary tree more interesting than maintaining a sorted list of numbers depends on this property. What happens in 2d with a quadtree: how many quadtree nodes are necessary to exactly describe an arbitrary rectangular area? Turns out you get O(W+H) nodes, not O(log N), not O(log W * log H).

    Storing each rectangle in the smallest node that completely contains it avoids the W+H problem during insertion, but during read operations you may end up in a situation where all the information is stored at the root. That is no better than having no quadtree at all and storing everything in a plain, unordered list. Such a worst case can be created very easily: if every rectangle contains the centre of the area described by the quadtree, then all rectangles will be placed at the root.
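
    A tiny Python sketch of that failure mode (a hypothetical quadtree using the "smallest fully containing node" placement rule): any rectangle that straddles a split line stops descending, so rectangles crossing the centre can only live at the root.

    ```python
    def insert_depth(rect, bounds, depth=0, max_depth=8):
        """Depth at which rect would be stored: the deepest quadrant
        that still fully contains it."""
        x0, y0, x1, y1 = rect
        bx0, by0, bx1, by1 = bounds
        cx, cy = (bx0 + bx1) / 2, (by0 + by1) / 2
        if depth == max_depth:
            return depth
        for q in ((bx0, by0, cx, cy), (cx, by0, bx1, cy),
                  (bx0, cy, cx, by1), (cx, cy, bx1, by1)):
            if q[0] <= x0 and q[1] <= y0 and x1 <= q[2] and y1 <= q[3]:
                return insert_depth(rect, q, depth + 1, max_depth)
        return depth  # straddles a split line: stays at this node

    world = (0, 0, 1024, 1024)
    print(insert_depth((10, 10, 20, 20), world))      # -> 5 (several levels down)
    print(insert_depth((500, 500, 524, 524), world))  # -> 0 (tiny, but stuck at the root)
    ```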

    Dynamically subdividing and/or allocating tree nodes on demand isn't anything particularly novel. If anything, the cases where you can use fixed sized static trees outside textbook examples are somewhat of a minority. Not saying there are no such cases, there are enough of them, but a large fraction of practical systems need to be capable of scaling to an arbitrary amount of data which gets added and removed over time and whose total size is not known ahead of time, and they need to be able to deal with arbitrary user data that has a maliciously crafted worst case distribution. Every B-tree will split leaves into smaller nodes when necessary. Almost every self-balancing binary tree can be considered to do automatic subdivision, just with a threshold of no more than 1 item per node. For 2d, many use cases of k-d trees will also subdivide on demand.

  6. Golly with the Hashlife algorithm is quite amazing. You can simulate and visualize quite a bit of it.

    Summary: in the end I was able to go through the full period with the memory limit set to 35GB. Most of the time all the action happens in 1+3x2 straight lines at different angles, no more than 100 cells wide each. There were multiple distinct phases. Someone could definitely make an interesting visualization zooming in on the distinct regions at different stages of progression. The general shape goes: horizontal line, <- shaped arrow, arrow with kite head, arrow with 2 nested kites, 2 giant nested kites, kite, arrow, horizontal line.

    At first I was able to simulate the first 15*10^9 generations quite quickly, and you could see some of the initial stages. During the first 2e9 generations it was using only 500MB of memory; somewhere between 2e9-4e9 it started to slow down. After bumping the memory limit to 16GB it was able to speed up again until ~15e9.

    Initially it looks like 3 strings of xxx: the first one the shortest, the second slightly longer and the third even longer. The xxx pattern oscillates between horizontal and vertical, giving a stable form to store information.

    The shortest string starts to get consumed from right to left. At some point it emits 2 gliders diagonally, up-left and down-left. After a while there are 2 vertical spaceships which collide with the 2 diagonal gliders. After a bit more it starts to emit a stream of gliders in the up-right direction, with the overall shape being like an arrow. At the back of the arrow the gliders start to build a structure for the next stage. The constructed structure starts creating a bigger stream in the up-right direction, which in turn starts emitting a stream down-right, back to the horizontal line. Once it meets the original line it creates a new line towards the first diagonal at a more gradual angle. Thus an arrow like shape with a tip consisting of 2 nested kites keeps expanding and consuming the line of oscillating xxx. By generation 2e9 it has processed the first 2 (smallest) of the 3 sequences. At ~30e9 it reaches the end of the third line and the inner kite starts to disappear while the outer kite keeps expanding. By 37e9 the inner kite has fully disappeared.

    At this point I further bumped the RAM so that I could inspect a zoomed-in view at 1x speed. Now, and probably previously, what looked like the sharp tip of the arrow is actually more complicated machinery receiving the incoming stream of gliders, processing it and then emitting towards the front in a way that reconstructs a line of xxx. I am guessing at this point it is reconstructing the initial line of xxx.

    At ~44e9 the first segment of the line was reconstructed and the machinery started to get torn down? It started to rebuild something near the outer edge of the arrow front, and shortly after created a new stream of spaceships along the outside of the arrow. Most of the time the structure consisted of 7+2 similar parallel streams. Some slowdown at ~66e9, probably a transition to a new phase. The front of the kite detaches from the central line, and the outer corner of the kite also starts to separate and tear down. By 88e9 the back of the kite has fully disappeared, leaving only the arrow. At 95e9 the central line starts to shrink. At 105e9 the central line has returned to a sequence of xxx. The sides of the arrow are still there. At 117e9 the sides of the arrow break in half and erase from the middle. At 133e9 it's back to a single line, starting from the beginning.

  7. Does anyone have good examples of this actually happening for end user software (like Ghostty is), where in the long term the proprietary fork won? Most of the recent variations of this that come to my mind are related to cloud infrastructure, stuff where you have serious business customers.

    And in some of those cases the GPL wasn't enough to prevent it. Niche end user utilities, where the original is available for free, have little room for monetization. And in many cases existing users are already choosing the open source option despite the existence of commercial solutions, or the niche is too small for commercial solutions to exist.

    The only thing that comes to my mind is VSCode with all the AI craze. But that doesn't quite fit the pattern: neither is Microsoft the underdog, nor is it clear that any of the AI based editors derived from VSCode will survive by themselves long term.

    There are also occasional grifters trying to sell open source software with little long term impact.

  8. It's a chicken and egg problem. Lack of ARM PCs due to software support, lack of software support due to negligible market share.

    The same argument can be applied to Linux: why not just compile the software for Linux? It's not that most companies couldn't do it, it's just not worth the hassle for 1-3% of the userbase. The situation with Linux also demonstrates that it's not enough to have just the OS plus a few dozen games/programs for which the hardware company sponsored ports; not even support for 10 or 30% of software is enough. You need support for 50-80% of software for people to consider moving. A single missing program is enough reason for people to reject the idea of moving to a new platform.

    The only way to achieve that is when a large company takes the risk and invests in both: builds modern hardware and also builds an emulation layer to avoid a complete lack of software. The emulator makes the platform barely usable as a daily driver for some users. With more users it makes sense for developers to port the software, resulting in a positive feedback loop. But you need to reach a minimum threshold for it to happen.

    Compilation for ARM isn't the biggest issue by itself. You also need to get all the vendors of third party libraries you use to port them first, which in turn might depend on binary blobs from someone else again. Historically backwards compatibility has been a lot more relevant on Windows, but that's also a big weakness for migration to a new architecture: a lot more third party binary blobs for which the developers of the final software don't have the source code, maybe somewhere down the dependency tree rather than at the top. A lot more users using ancient versions of software. Also more likely that there are developers sitting on old versions of Visual Studio compared to macOS.

    Compare that with how the Apple silicon migration happened:

    * Releasing a single MacBook model with the new CPU is a much bigger fraction of the Mac hardware market share than releasing a single Windows laptop with an ARM CPU.

    * Apple had already trained both the developers and the users to update more frequently. Want to publish in the Apple App Store? Your software needs to be compiled with at least Xcode version X, targeting SDK version Y. Plenty of other changes forced most developers to rebuild their apps and users to update so that their apps keep working without workarounds or don't stand out (Gatekeeper and code signing, notarization, various UI style and guideline changes).

    * Xcode, unlike Visual Studio, is available for free, so there is less friction migrating to new Xcode versions.

    * More frequent incremental macOS updates compared to major Windows versions.

    * At the time of the initial launch a large fraction of macOS software worked with the help of Rosetta, and a significant fraction received a native port over the next 1-2 years. It was quickly clear that all future MacBooks would be ARM.

    * There are developers making macOS exclusive software whose selling point is that it's macOS native, using native macOS UI frameworks and following macOS conventions. Such developers are a lot more likely to quickly recompile their software for the latest version of macOS and the latest Mac computers, or make whatever changes are necessary to fit in. There is almost no Windows software whose main selling point is that it is Windows native.

    * Apple users had little choice. There was maybe 1 generation of new Intel based Apple computers in parallel with the ARM based ones. There are no other manufacturers making Apple computers with x86 CPUs.

  9. That's not Intel syntax, that's more or less ARM assembly syntax as used by ARM documentation. The Intel vs AT&T discussion is primarily relevant only for x86 and x86_64 assembly.

    If you look at the GAS manual https://ftp.gnu.org/old-gnu/Manuals/gas-2.9.1/html_chapter/a... almost every other architecture has architecture specific syntax notes, in many cases for something as trivial as comments. If they couldn't even decide on a single symbol for comments, there is no hope for everything else.

    ARM isn't the only architecture where GAS uses the same syntax as the developers of the corresponding CPU architecture. They are not doing the same for x86 due to historical choices inherited from the Unix software ecosystem and thus AT&T. If you play around on Godbolt with compilers for different architectures, it seems like x86's use of AT&T syntax is the exception; there are a few others which use similar syntax, but they are a minority.

    Why not use the same syntax for all architectures? I don't really know all the historical reasoning, but I have a few guesses, and each architecture probably has its own historic baggage. Being consistent with the manufacturer docs and the rest of the ecosystem has obvious benefits for the ones who need to read it. Assembly is architecture specific by definition, so being consistent across different architectures has little value. GAS is consistent with GCC output. Did GCC add support for some architectures early with the help of the manufacturer's assembler and only later in GAS? There are also a lot of custom syntax quirks which don't easily fit into the Intel/AT&T model and are related to the various addressing modes used by different architectures. For example ARM has register post-increment/pre-increment and 0 cost shifts; ARM doesn't have subregister access like x86 (RAX/EAX/AX/AH/AL), and non-word access is more or less limited to load/store instructions, unlike x86 where it can show up in more places. You would need to invent quite a few extensions for AT&T syntax for it to be usable by all the non x86 architectures, or you could just use the syntax made by the developer of the architecture.

  10. It's less about ensuring a perfect layout and more about avoiding an almost guaranteed terrible layout. Unless your filesystem is already heavily fragmented, it won't intentionally shuffle and split big files without a good reason.

    A single large file is still more likely to be mostly sequential compared to 10000 tiny files. With a large number of individual files the file system is more likely to opportunistically use the small files to fill previously left holes. Individual files more or less guarantee that you will have to do multiple syscalls per file to open and read it, plus potentially more indirection and jumping around on the OS side to read the metadata of each individual file. Individual files also increase the chance of accidentally introducing random seeks due to a mismatch between the order the updater writes files, the way the file system orders things, and the order in which the level description lists and reads the files.
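
    A rough Python sketch of the syscall difference alone (the file names are hypothetical; real timings are dominated by the filesystem and cache state, so treat this as illustrative only):

    ```python
    import os

    CHUNK = 1 << 20  # 1 MiB

    def read_one_big_file(path):
        """One open plus a stream of sequential reads."""
        with open(path, "rb", buffering=0) as f:
            while f.read(CHUNK):
                pass

    def read_many_small_files(directory):
        """At minimum an open/read/close round trip per file, plus metadata
        lookups, in whatever order the directory listing happens to return."""
        for name in sorted(os.listdir(directory)):
            with open(os.path.join(directory, name), "rb", buffering=0) as f:
                while f.read(CHUNK):
                    pass

    # e.g. read_one_big_file("level.pak") vs read_many_small_files("level_tiles/")
    ```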

  11. Using the font itself and setting line spacing to 1 instead of whatever GitHub uses makes it slightly easier to understand.
  12. Do you mean as an optical mask for the photo step of the process, or directly as a resist for chemical etching, skipping the photo part?

    I have done both some home photochemical PCB etching and some vinyl cutting but not that specific combination.

    As a photo mask it makes little sense in most cases. Just buying a transparency which can be used in an inkjet printer will likely be faster, easier and produce better resolution. These used to be widely available due to their use in overhead projectors in schools and offices, and they still shouldn't be too hard to get thanks to their use in screen printing custom t-shirts.

    As a direct physical resist it makes a bit more sense, since it would allow skipping one chemical bath and the photo transfer process. I have seen some people online having very good results with it for decorative etches with moderately sized details on thicker metal parts and glass. But I am somewhat skeptical about it withstanding full depth cutting of very fine grids with high pressure circulation like the one demonstrated in the Applied Science video. It will likely come down to etching depth (is it a 0.2-1mm metal sheet, the 0.035mm copper layer on a PCB, or a decorative surface etch), how aggressive your liquid circulation is, the type of vinyl and its adhesive (ones designed for outdoor use might resist the liquid for longer), and the size of the details. With a concentrated enough etching liquid allowing fast etches, mild agitation and wide enough lines (>1mm), the vinyl should hold up.

    In the worst case, if adhesion during etching turns out to be a problem, it should still be possible to use the vinyl as a stencil while painting on whatever paint is to be used as the resist. This should still be much faster than doing the photo step.

    One of the good parts of photochemical manufacturing is that you can make something like a mesh with hundreds of tiny holes, which would be impractical with any other approach; it doesn't matter how complex the pattern is. While you might be able to cut such patterns on a vinyl cutter, worst case by leaving the machine to work for a few hours, weeding it might be a big problem. After cutting you need to manually peel off the half of the image you don't want (called weeding). For simple large shapes it's not a big deal, but for complex cuts that have a lot of holes or a maze like structure it can be quite time consuming. There are industrial cutters that can do the weeding automatically, but I don't think any hobby level machines like the Cricut have this feature.

    If you have something like a mesh and you are removing the mesh part, leaving only the tiny dots, or a pattern with thin long unsupported lines (like a PCB), you need to be very careful to avoid accidentally nudging and separating the small details. This can happen during weeding, during transfer to the target material and even during cutting (for some types of materials). The last one was a major problem when I tried cutting copper tape directly; the original backing tape was just too slippery. It's less of a problem for suitable vinyl.

    None of that gives you a hard answer, but I hope my experience was of some use to you.

  13. For the last 2 years PyPI (the main Python package repository) has required mandatory 2FA.

    Last time I did anything with Java, it felt like the use of multiple package repositories, including private ones, was a lot more popular.

    Although the higher branching factor for JavaScript and the potential target count are probably very important factors as well.

  14. Did you have the labels hidden? Even with that, having a single shape with corners sorted by continuity degree might influence the result towards choosing the last as best.

    For a proper blind test it would help to have separate physical objects, maybe even with varying corner sizes so that you can't easily rely on a bigger=better intuition when comparing two corners of different objects.

  15. Using regular Béziers you should be able to get at least G1, and symmetric smooth cubic Bézier nodes should be C1. With regards to G2, Inkscape and CorelDRAW have B-splines; I'm not an Illustrator user, but from what I could find it seems like Illustrator lacks them.
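
    Concretely, for two cubic Béziers P0..P3 and Q0..Q3 joined at P3 = Q0, the end tangents are 3·(P3-P2) and 3·(Q1-Q0); C1 means those vectors are equal (mirrored handles of equal length), while G1 only requires that they point the same way. A small Python sketch with made-up points:

    ```python
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1])

    def cross(a, b):
        return a[0] * b[1] - a[1] * b[0]

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    def continuity(p2, p3, q1):
        """Classify the join of two cubics sharing the node p3 (= q0)."""
        t_in, t_out = sub(p3, p2), sub(q1, p3)   # end/start tangent directions
        if t_in == t_out:
            return "C1"                          # same direction and magnitude
        if cross(t_in, t_out) == 0 and dot(t_in, t_out) > 0:
            return "G1"                          # parallel, same direction only
        return "corner"

    print(continuity((8, 2), (10, 4), (12, 6)))  # mirrored handle -> C1
    print(continuity((8, 2), (10, 4), (14, 8)))  # longer handle   -> G1
    print(continuity((8, 2), (10, 4), (12, 3)))  # kinked handle   -> corner
    ```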

    Blender has tools for at least basic NURBS modeling.

    I am slightly skeptical about the need for this in 2d vector drawings (outside a few very specific use cases). A big reason for wanting higher degree continuity of 3d surfaces in industrial design is that looking at reflections in mirror-like surfaces (like cars) makes the difference very obvious. For 2d drawings you can't really look at the side of the drawing and see a 1d reflection.

  16. The Python logo actually has 3/4 of a swastika. It's only one vertical line that is missing.
  17. If you look hard enough, almost any 4 way rotational symmetry will result in a variation of a swastika like shape. You would have to almost completely ban 4 way rotational symmetry to avoid it.

    I personally find it unhealthy to actively search for, extend and strengthen the association of hate symbols based on vague similarity out of context. Sure, remember the crimes they have done and avoid the exact specific shape, proportions and colors commonly used by the hate groups, but also take context into account. Don't promote them by giving them credit for things they didn't do. Don't let the hate groups win by allowing a few dozen years of activity to destroy thousands of years of cultural and language history, and the future, of a wide category of symbols and the simplest geometric patterns. Don't erase words from common language. Don't let them make your life worse through self inflicted excessive censorship. Grow the good associations, not the bad ones; dilute and take away the strength from hate groups instead of letting them take away common language from you. If you look at a thousand year old Buddhist or ancient Greek stone carving which uses one of the few dozen swastika variations and think of time traveling Nazis plastering their symbols all over the place, they win.

    When looking at children playing with a paper pinwheel, is your first thought also that they must be Nazis? When you look at a cardboard box with 4 flaps overlapped on top of each other, do you think Nazi?

    With regards to other people's speculation on how this happened: I doubt they intentionally tried to create a swastika, it just happens naturally when you use rotational symmetry. Looking at this logo I personally see the overall cross and spinning shape formed by the positive space first. The image of a swastika formed by the negative space is kind of weak and clunky due to the thickness mismatch created by the curved rhombus. If they had used 4 overlapping squares or circles it would be more problematic, and at that point a logo designer would likely stop and try to mix things up to get rid of it.

  18. I'd say the unintuitive part is mostly a problem only if you abuse fragment shaders for something they weren't meant to be used for. All the fancy drawings that people make on Shadertoy are cool tricks, but you would very rarely do something like that in any practical use case. Fragment shaders weren't meant to be used for making arbitrary drawings; that's why you have high level graphics APIs and content creation software.

    They were meant to be the means for a more flexible last stage of a more or less traditional GPU pipeline. A normal shader would do something like sample a pixel from a texture using UV coordinates already interpolated by the GPU (you don't even have to convert x,y screen or world coordinates into texture UVs yourself), maybe from multiple textures (normal map, bump map, roughness, ...), combine it with the light direction and calculate the final color for that specific pixel of the triangle. But the actual drawing structure comes mostly from the geometry and the textures, not the fragment shader. With the popularity of PBR and deferred rendering, a large fraction of objects can share the same common PBR shader parametrized by textures, with only some special effects using custom stuff.
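
    Very roughly, the per-pixel work of such a "normal" shader boils down to something like the toy CPU sketch below (Python/NumPy, with made-up texture, normal and light data, purely to show where the structure comes from):

    ```python
    import numpy as np

    H, W = 64, 64
    albedo = np.random.rand(256, 256, 3)          # stand-in for a sampled texture
    uv = np.stack(np.meshgrid(np.linspace(0, 1, W),
                              np.linspace(0, 1, H)), axis=-1)  # interpolated by the GPU
    normal = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))     # from geometry/normal map
    light_dir = np.array([0.3, 0.4, 0.866])

    def fragment(uv_px, n_px):
        """What one fragment invocation does: sample, shade, return a colour."""
        tx = (uv_px * 255).astype(int)            # 'texture fetch'
        base = albedo[tx[1], tx[0]]
        n_dot_l = max(np.dot(n_px, light_dir), 0.0)
        return base * n_dot_l                     # simple diffuse term

    image = np.array([[fragment(uv[y, x], normal[y, x]) for x in range(W)]
                      for y in range(H)])
    # The interesting structure (UVs, normals, the texture) all comes from elsewhere;
    # the shader itself is just this small per-pixel function.
    ```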

    For any programmable system people will explore how far it can be pushed, but it shouldn't be a surprise that things get inconvenient and not so intuitive once you go beyond the normal use case. I don't think anyone is surprised that computing Fibonacci numbers using C++ templates isn't intuitive.

  19. The term is "physics based model"; it has a somewhat specific meaning in the context of mathematical/physical modelling. It has nothing to do with all the physics required to make MRI work. A model doesn't have to be based on physics to be useful. You can often get some recognizable image with dumb stronger signal => brighter pixel logic, without fully modelling how or why the signal changes. As long as a change in material correlates with a change in signal (it doesn't even have to happen uniformly), you can get some picture and leave the interpretation to a human.

    A simpler example would be temperature control. You can have a simple hysteresis based approach: temperature under threshold, turn on the heater; above threshold, turn it off. Or you can have a physics based model of what the heating power is, what the heat capacities of the chamber and of the object are, how the temperature diffuses within the object, and what the thermal resistances of the interfaces between heater, target and temperature sensor are. Many everyday systems are controlled by generic PID controllers without physically modelling how exactly the process reacts to input; a PID can be considered a mathematical control model with sufficient parameterization to approximate various physical systems. You could also make an AI based model and create a signal processing function that way. For many drones the PID coefficients are tuned by hand; it was quite a surprise when one of the controller manufacturers made a physics based model to calculate suitable defaults based on drone mass, moment of inertia and maximum thrust.
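
    To make the temperature example concrete, here is a minimal sketch of the two non-physics approaches mentioned (simple hysteresis and a textbook PID), with made-up gains. Neither encodes anything about heat capacities or thermal resistances; that is exactly what a physics based model would add.

    ```python
    def hysteresis_controller(temp, setpoint, band=0.5, heater_on=False):
        """Bang-bang control: only knows 'too cold' / 'too hot'."""
        if temp < setpoint - band:
            return True
        if temp > setpoint + band:
            return False
        return heater_on  # inside the band: keep the previous state

    class PID:
        """Generic PID: enough parameters to approximate many physical plants,
        but the gains are tuned, not derived from the physics."""
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement, dt):
            error = setpoint - measurement
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=2.0, ki=0.1, kd=0.5)   # hand-tuned, like most drone setups
    power = pid.update(setpoint=60.0, measurement=25.0, dt=0.1)
    ```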

    Technically the title isn't lying. The researchers created a new physics based model which is more detailed and makes fewer simplifications compared to the old physics based model. The qualification also clarifies that the potentially sharper image won't be achieved by a new device model, or by a picture of a 3d model printed on marketing materials.

  20. Large companies have repeatedly demonstrated that they will pick whichever interpretation is most convenient at the time. When there are pitchforks they will claim that you are confused and misinterpreted the writing, but when you get poisoned by food in their restaurant and try to sue them, they will point at the terms of service of their online video streaming service that your spouse agreed to 5 years ago as if that's relevant (not a joke, Disney tried that one). These things are supposed to be written by professionals; I don't think Hanlon's razor sufficiently explains it. Terms of service are at least partially intentionally written as vaguely and unclearly as possible for the benefit of one side.
