- The turbulent times and the breakneck speed of computer development need to be taken into account. Not long before that, computer networks were strictly corporate things, installed by contractors who chose hardware, driver, and software suppliers to suit the tasks performed by employees or students; someone who installed one at home was the same kind of nerd who would drag an engine home from work to tinker with in his room. Non-business software rarely cared about third-party network functions. Then the network card became a consumer device, and a bit later it became integrated and expected.
Also, Windows did not install TCP/IP components on computers without a network card (which was most of them until the Millennium era); it was an optional component. You could not “ping” anything, as there was no ping utility, nor the libraries it would call. In that respect, those network-less Windows systems were not much different from network-less DOS systems. The installer probably still works that way (or can be made to, by excluding some dependencies), but it's hard to find hardware without any network connectivity today. I wonder what the Windows 11 installer does when there is no network card to phone home through...
- What you're saying is “Fat, ugly, flat-chested, etc. women are not real women, I only care about my kind of true women, please only show me those”. That's exactly what I was talking about. The measure by which you define women in your head is shaped like that, and you reluctantly decide whom to let pass. Yes, it is common. Yes, it is stupid. What's actually new in that?
- Like many others, you believe that by talking about certain things you're directly touching the “real”, “material”, “physical” world. The so-called humanities, barely even taught to the so-called educated public, could help you see how that specific creed of scientism spread over the last two centuries, how it was tied to mass education and journalism and not actually to science, and what made you stick to it (no one told you that the outside world exists). You chose comfortable ignorance.
Being a proud servant of the status quo is neither fresh, nor smart, nor scientific. Illiterate savages worshipped their idols in the exact same fashion.
- > some people -want- men and women to be interchangeable in all respects
That's a generalisation so broad it loses all meaning.
For example, suppose you're building giant pyramids out of human corpses. Based on experiments and precise calculations, you know that you need x.y% more female corpses than male ones, and you get really angry when suppliers try to argue that men and women are equal in all respects.
Obviously, that is not the important question. We should instead be asking questions about your activity as a whole.
(It might seem that my example is a bit over the top, but people around us do things that are just as bad, and with real enthusiasm.)
Gender issues are not just abstract; they are tied to problems people see in society. It is not fashionable today to simply state that someone who behaves in a certain manner is a swine (the usual answer being that you are not allowed to limit anyone's freedoms), so the dumb, pathetic fences of bureaucratic states are used instead, and talk about progress, benefits, equality, a shared future, etc. floods the stage. However, you personally still think either that something is right, or that it's wrong.
During “millions of years of evolution”, in many places any woman walking alone, without a male relative or servant, could be treated as a potential free sex toy. It was simply “evident” that anyone could try to rape her, and “everyone” knew that: men, women, kids. It was “natural”, and even the commandment about the neighbour's wife could easily be read as excluding “no one's” or “everyone's” wives, so it was even sanctioned from above. Obviously, it all stemmed from the heads of certain people, and changes that introduced consequences mutated that “natural order”.
By the way, the figure of the noble male saving a damsel from the jerks is considered noble because... he could've joined the party, but chose not to. What a hero! Simple-minded people continue to see things the old way to this day, when they expect heroines saved by the hero to have sex with him immediately. Because what other options are there, really?
- I think you misunderstood me. Sadly, philosophy is completely absent from “required” education in our “enlightened” world (or is presented as narrow-minded bean counting), so recognising “evident things” as results of a thought process is hard.
By one side of the equation I meant all those arguments about “men” and “women” altogether. You are absolutely free to state that men are X and women are Y, and attribute it to Nature as a whole, or to scientific data sliced off of it. There is nothing wrong with that by itself. However, the whole other, “stable” side which you try to “fix” by this process is no less of an invention.
Say we're arguing about whether cucumbers are fruits or vegetables. In that case, we can even reach an “official” answer. But it's more important to realise that the whole stage on which we're playing is constructed: “fruits” and “vegetables” are convenient man-made classifications. A cucumber does not come with a label saying “I'm a cucumber, as stated in encyclopaedias, etc.” Nor do its atoms come with a label saying “We're parts of that cucumber thing”, nor anything else (a note for our young vulgar materialists).
In my opinion, feminist thought taking that step (which, for multiple possible reasons, was not taken even by the greatest thinkers) is its most important achievement. Which “wave” is right, or how to “correctly” display your alignment with the “correct” movement according to the latest fashions, are ancillary questions.
- The idea that there are straightforward “female” and “male” traits should seem quite shaky to anyone who is even a little bit into the humanities. The problem is that both sides of the equation are constructs of the mind: both the thing we would like to measure, and the measure itself. Fighting journalistic simplification with journalistic simplification is not the aforementioned “pursuit of truth”. So before arguing whether pink is “for silly girls” or a “proud female colour”, it would be good to remember that its association with gender basically started only yesterday.
Even if we assume that there are “standard men” and “standard women”, there's another problem: the office politics occurring in country M in century N is almost certainly the product of a specific culture, not of some caveman rituals. The problems of Patrick or Patricia Bateman are probably quite alien to a lot of people in the world.
The irony is that the image of the “good old days” is itself based on modern-day stereotypes. So-called progressive propaganda was quite focused on the caricature of a concentrated Bad Masculine Man, and now, freshly repainted, that caricature is presented as a positive example (because the public is familiar with it, and making the public think is too hard).
- It's a plugin system for steganographic traffic proxying that evades advanced detection systems. https://github.com/XTLS/Xray-examples
The Chinese, Russian, and Persian links could've given you a hint. Though after recent developments in Britain, I'm sure English docs will appear as well.
- The exaltation displayed in this discussion thread is something everyone should ponder. Some stupidity specific to a certain era and place on Earth, just another tumour of the uncontrolled bureaucracy which always grows, is discussed as if it were an eternal property of the God-given Universe.
A hijacked plane is a popular media spectacle with lots of ties to other images and scenes. Millions are ready to discuss it, or to listen to the thrilling stories. “This is important for security!” works as a shazam in that context. At the same time, much closer and more routine dangers directly affecting many people (power plants, refineries, railroads, and so on) are kept in check by underpaid workers who can't even make companies fix sensors or replace parts until they have rusted through. Effectively, “this is not important for anything”, and the public is not interested in TV shows about a working pipeline that is not getting blown up. Those who want money and power naturally stick to the impressions that work on the crowd they are given.
Propaganda is most successful when people do the required thing on their own, agree that it's absolutely impossible to evade, and even encourage each other. Something in this day and age makes people adore certain forms of propaganda, and even demand to be told specific lies. Among other things, images of stupid social machines crushing someone (“they'll put you on the list”, etc.) seem to stimulate the crowd in a strange way.
Even in the so-called globalised world there are examples that crack the habituation. In country A, any big gathering of people needs to be formally approved, supplied with hordes of policemen (thankfully, not tanks), fences (thankfully, no barbed wire), and entrance searches (thankfully, without stripping). When you ask anyone about that, they promptly respond with “What if terrorists/enemies decide to attack the crowd?” or “What if they start to riot?” (notice that “they”), etc. Even the most obvious security theatre acts are automatically accepted, promoted to “psychological stuff that helps to detect those people in the crowd”. In country B, no less “civilised”, the same event is handled by some private company mostly worried about portable toilets and electric generators, and people come freely to the venue if they like it (just buy the ticket).
The odds of something going wrong are roughly the same, but people reason about themselves and those around them very differently. It is that mental picture of the world that shapes what happens, not the alleged expert opinions or calculations.
- Many years ago, some dial-up providers in my city offered free public logins for using their websites (scratch card activation, account renewal, user guides, and so on). Some companies also paid ISPs to have their sites and services accessible in the same fashion, for promotional reasons.
At one provider, all those free logins used the same firewall configuration, which only allowed traffic to the free services and the ISP's site, probably for simplicity, so all of them were accessible with any promotional login. Most of them were not useful (to me), but different agreements with the ISP resulted in different call time limits before hang-up: 10-15 minutes instead of 3-5.
However, the main treasure was an external page translation service added as a feature on some big site. Back then, it was strictly static and server-side: a request with a URL in it gave you that page's HTML source, with the text strings translated and the paths to external resources made absolute. So for translation to work, users needed to be able to access that third-party server too, and obviously, if you gave it any other URL, the server would grab that to translate as well (and choosing the least similar source language in the parameters would leave most of the page text intact).
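For the curious, a rough sketch of how that looked from the user's side. The endpoint and parameter names below are invented for illustration; only the general request shape is from memory:

    import urllib.parse
    import urllib.request

    # Hypothetical endpoint: the real service name and parameters are long gone.
    TRANSLATOR = "http://translate.example-isp.net/cgi-bin/translate"

    def fetch_via_translator(url, src="fi", dst="en"):
        # The translation server fetches `url` itself, so any page it can
        # reach comes back through the "free" login. Picking a source
        # language least similar to the page's actual one leaves most of
        # the text untranslated, i.e. intact.
        query = urllib.parse.urlencode({"url": url, "from": src, "to": dst})
        with urllib.request.urlopen(TRANSLATOR + "?" + query) as response:
            return response.read()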
You can imagine that a browser supporting tabs and a switch to turn media off was very handy for loading as many free web pages, in text-only form, as those dial-up sessions allowed.
Obviously, WWW-to-email services for people who only paid for mail server access had existed even before that.
- Yes, you need to test the exact protocol you want to use. This means tcping/curl, TLS with proper certificates and SNI domains, etc.
However, just as you make sure the power supply actually supplies power before dismantling a device that refuses to work down to the last washer, repairing network problems should start with the basics. A simple test that fails, or shows something nonsensical, is a great hint that you forgot something, or that you should start digging elsewhere.
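A minimal sketch of that two-step approach in Python; host and port are placeholders for whatever endpoint you are debugging:

    import socket
    import ssl

    host, port = "example.com", 443  # placeholders for the endpoint under test

    # Step 1: the "does the power supply supply power" check -- plain TCP connect.
    with socket.create_connection((host, port), timeout=5):
        print("TCP connect: OK")

    # Step 2: the protocol you actually use -- TLS with certificate
    # verification and the same SNI name a real client would send.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print("TLS handshake: OK,", tls_sock.version())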
- Limiting the availability of third-party services based on a local service provider's fee can only be done 100% reliably on the service side, through an agreement with that provider; i.e. WhatsApp needs to disable certain functions for users coming from certain dedicated links or IP ranges, or even based on live user status metadata. There's an obvious size mismatch, and no incentive to implement compartmentalisation needed only by some other company. It also creates enormous shared responsibility and a potential circular finger-pointing clown show, all for a relatively tiny number of affected paying users.
Therefore, it is done either with the least amount of work that is “good enough” and can run on the cheapest router (rate-limit to the absolute minimum, ban connections to ports 80 and 443, maybe cut the traffic to the most stable IP ranges of the biggest services, and a regular person will conclude that “nothing else works”), or through very extensive commercial DPI with lots of guessing and ad-hoc rules (if this feature is important for the income, and many will try to game the system). So it's either going to be as simple as in this example, or you'll be competing with the global army of detection rule authors.
Though I do like the wink-wink, nudge-nudge choice of proxy software.
- IBM never planned for PCs to be graphically impressive (unlike other gaming-oriented microcomputers, which had hardware sprites, buffer scrolling, and various special modes), so conceptually its video cards were pretty simple: you change values in video memory, and letters or pixels on screen change accordingly. The rest was up to you. Initial setup could be complex (or hidden behind Video BIOS calls into specific firmware), along with the memory layout (for various reasons, including production costs), but the thing was basically just an output buffer.
There were some accelerators providing mind-blowing functions like drawing a line in hardware given its start and end coordinates, or filling a rectangle with a given colour in hardware. If you used professional CAD programs, where taking drawing routines away from the CPU meant going from 1-2 fps to “tolerable” object movement, there was a point in paying a significant sum of money for such a niche device. Later, Windows strongly suggested that graphics card makers implement the most often used window-drawing primitives in hardware and offer them to the system via a standard driver interface. That, too, was of little use to most games.
Non-standard modes and hacks manipulating output settings to make the screen scroll without redrawing the whole buffer (which was too slow), and other tricks, like changing mode parameters precisely at the moment some specific part of the image was being sent to the monitor, were sometimes possible, but they don't apply here. Doom engine games render the upper world window on the CPU frame by frame, writing the resulting pixels to video memory (the status bar can be partially updated, or just ignored for our case). So it's a simple stream of writes, and the difference is in how fast they are processed.
What could be different?
— Memory subsystem and address mapping. I'm not sure about the details, but some hardware settings could possibly tell the motherboard chipset to ignore caching/ordering/coherency and instantly return success from any write while it was still in flight. That would mean any program that also needed to read from video memory could sometimes get incorrect or corrupted results. (Though I contradict myself here: spectres and the screen wipe effect in Doom needed to read from the screen buffer.)
— System bus type, width, and rate. Smaller delays mean more frames per second (if images can be calculated fast enough).
— Video memory type, rate, and timings; video chip speed and architecture; internal bus width; presence of caching buffers; and so on. These are the most important differences, and there are benchmarks of VGA cards in contemporary magazines and on modern retro forums.
However, the video card itself couldn't make your CPU work at twice the performance; it could only limit the amount of data that went to the screen. I suspect that either the card was not very compatible, trading simplicity and performance in certain graphical modes for bugs and slowdowns in others, or the numbers you saw were simply incorrect. For example, if the video card forced a 60 Hz refresh rate in a canonically 70/75/85/etc. Hz mode, a program could start calculating nonsensical delays, insert rendering cycles that did nothing, and show crazy stats.
- No, that's just a reminder that you had a choice, and chose empty talk about “ecosystems” over the ability to control what you can see on “your” screen. You've stepped on the rake once, you got some experience; why repeat it over and over again?
- The error states that the window can't be created. It might be a problem with the parameters passed to the window creation function (which should not depend on game state), or maybe the system is out of memory. Could it be that resources allocated in memory are never cleaned up, because the cleanup time overflows?
Doom4CE (this port) was based on WinDoom, which creates the program window only once at startup, then switches the graphics mode and proceeds to draw on screen independently, processing keyboard and mouse input messages. I'm not sure, but maybe Windows CE memory management forced the programmer to drop everything and start from scratch on every level load? But then why do we see the old window?
There are various 32-bit integer counters in the Doom code. I find it quite strange that the author neither names the specific one, nor says what it does, nor tries to debug what happens by simply initialising it with some big value.
Moreover, 2^32 divided by 60 frames per second, then by 60 seconds, 60 minutes, 24 hours, 30 days, and 12 months gives a little less than 2.5 years. However, the Doom gameplay tick (or “tic”), on which everything else is based, famously happens only 35 times a second, and is detached from the frame rendering rate both on systems that are too slow (many computers at the time of release) and on systems that are too fast (most systems that appeared afterwards). 2^32 divided by 35, then by 60 seconds, etc., gives about 4 years until overflow.
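A quick sanity check of that arithmetic (back-of-the-envelope, using a 365-day year this time):

    # Time until a 32-bit counter wraps, at the two plausible increment rates.
    for rate_hz, label in [(60, "60 fps frame counter"), (35, "35 Hz gametic counter")]:
        seconds = 2**32 / rate_hz
        years = seconds / (60 * 60 * 24 * 365)
        print(f"{label} overflows after ~{years:.2f} years")
    # Prints roughly 2.27 years at 60 fps and 3.89 years at 35 Hz.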
Would be hilarious if it really is such an easy mistake.
- Extending the user shell was always for the benefit of Microsoft first, and for the rest of us second. Companies that tried to add seriously advanced functionality through it found out that they were not really welcome, even though in theory everything was object-oriented, loosely coupled, language-agnostic, yada yada. It was probably mainly a solution for reducing the number of territory wars over code inside Microsoft, letting teams work independently without demanding synchronous fixes.
We shouldn't forget that half of the documentation on how everything was supposed to work was only released after the anti-trust investigation and pressure from the courts.
https://www.geoffchappell.com/studies/windows/shell/index.ht...
By that time, Windows had already switched to shiny fresh undocumented technologies.
You should remember that a lot of software around the Millennium simply re-implemented the latest fashion trends (Office controls, XP styles, etc.) on its own, in not-quite-exact ways. That was not just because programmers back then wore skins, ate raw meat, and feared nothing, and because such third-party toolkits were available commercially, but also because official interfaces for the same things were not offered until later releases (or, sometimes, ever). I vaguely recall discussions of teams within Microsoft sometimes doing the same, resulting in Office controls which share no code with the common controls used by the rest of the system, yet look exactly the same.
The irony is that web browsers, the most popular kind of application, and the interfaces built on top of their technology simply ignore the native interface toolkits and do everything independently, which is even more extreme than the old-school custom paint handlers. A lot of work has been spent there on re-implementing native look and behaviour, multiple times.
One example I can name of an actively supported application interfacing with the shell object hierarchy without using the system dialogs is Tixati, which needs to have its own file picker(s) inside its own dialogs. It's not open source, but it has been using GTK, custom controls, and probably some sauce on top. Given the user complaints about its performance or missing items, and its regular appearance in the change log over the years, I'd say it's a wrestling match for the author. At the moment, the tree is lazily populated on first appearance, which is fine, but not unnoticeable.
It is a bit hilarious that showing a list of file names in some directory is a trivial exercise in using system-provided iterators for novice programmers, but adding icons matching the ones Explorer shows (handling links and other special files, special directories, non-filesystem locations which still contain files the user might want to choose, etc.) suddenly turns it into a nightmare.
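To make the contrast concrete, here is the entire “novice” half of the task (plain Python, just to show the size; the icon-matching half has no comparably small counterpart):

    from pathlib import Path

    # The trivial half: file names in a directory, a first-lesson exercise.
    for entry in sorted(Path(".").iterdir()):
        print(entry.name)

    # The nightmare half (the exact icon Explorer would show, shell links,
    # virtual folders, overlays) has no three-line equivalent.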
- There was a guy, Snowden or something, who leaked some first-party reports. They stated that no magical quantum crypto-breaking was happening at global scale; keys were simply stolen, or backdoors were used to access the clear text at the sender or receiver.
Ephemeral keys (not stored for possible future leakage) quickly became the default, and assumptions about global data gathering changed. Then, all of a sudden, a “free” service appears that makes all those TLS improvements, big and small, practical and theoretical, useless. What a coincidence!
For some reason, you assume that people who have been stealing everything they can (because doing crime for the Big Guy is not a crime) consider this specific company untouchable. That is impossible. Every country in the world wants to have its spying capacity at maximum (following the shameless example), and to flex muscles back at the American services doing the same. The reason we only read about clashes over movie piracy and other petty stuff is that the more serious matters have been discussed and dealt with.
Facebook offers “free” hosting and other services to individuals (social networks are poor, walled versions of the Web). Cloudflare offers “free” CDN and other services to website owners. The actual business model is the same, and lies are still lies.
- This sounds like a job for named pipes: you get the temporary file path, but nothing is actually written to disk. Or maybe unnamed pipes, if bash process substitution is suitable for creating the list of options.
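A minimal sketch of the named-pipe variant (POSIX-only; 'cat' stands in for whatever tool insists on being given a file path):

    import os
    import subprocess
    import tempfile
    import threading

    # A FIFO has a path like a file, but the data flows through memory.
    fifo_path = os.path.join(tempfile.mkdtemp(), "options.fifo")
    os.mkfifo(fifo_path)

    def feed_options():
        # open() blocks until the consumer opens the other end of the FIFO.
        with open(fifo_path, "w") as fifo:
            fifo.write("option-a\noption-b\noption-c\n")

    threading.Thread(target=feed_options, daemon=True).start()

    # The consumer sees an ordinary path; nothing ever touches the disk.
    subprocess.run(["cat", fifo_path], check=True)
    os.remove(fifo_path)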
Looking back, it's unfortunate that the Unix authors offered piping of input and output streams, but did not extend that to an arbitrary number of streams by making process arguments just a list of streams (with some shorthand form for constants typed on the command line, and a universal grammar). We could have grown used to programs that react to multiple inputs or produce multiple outputs.
It obviously made sense in the '70s to just copy the call string into some free chunk of memory in the system record of the starting process, and let it parse those bytes in any way it wants. But, as a result, we can't just switch from a list of arguments to an arbitrary stream without rewriting the program. In that sense, argument strings are themselves a workaround, a quick hack which gave birth to ad-hoc serialisation rules, multi-level escaping chains, lines that are “too long” for this random system or for that one, etc.
- Excuse me, what did you just say? Who decided on “Cloudflare's importance in the Internet ecosystem”? Some see it differently, you know; there's no need for that self-assured arrogance of an inseminating alpha male.