gregmac
9,763 karma
Simplicity is underrated. YAGNI.

  1. > To put a bit more colour on this, I think the fear of most devs with an ultra-simple framework like this is that eventually you hit a wall where you need it to do something it doesn't natively do, and because the only thing you know is these magical declarative hx-whatever attributes, there's no way forward.

    I'm not sure "fear" is exactly the right word here, but it's something I consciously evaluate when looking at any new framework or library. Anything with a lot of "magic" or "by-convention" config is subject to this.

    You start with the "hello world" example, and then you hit that wall. The test of an awesome framework versus a fundamentally limited (or even broken) one is: can you build on what you have with extensions, or do you have to rewrite everything you did? There are a lot of these where, as soon as you want to do something slightly custom, you can't use _any_ of the magic and have to redo everything in a different way.

    This isn't just libraries, either. An almost worse example is AWS Elastic Beanstalk. It's a simple way to get an app up and running, and it handles a lot of the boilerplate for you. The problem is that as soon as you want to extend it slightly, like adding a custom load balancer route, you have to completely abandon the entire thing and do _everything_ yourself.

    This is a really hard thing to get right, but in my view it's one of the things that contributes to a framework's longevity. If you hit that wall and can't get past it, your next project will be in something else. Once enough people start posting about their experiences hitting the wall, others won't even pick it up, and the framework dwindles to a niche audience or dies completely.

  2. > if your source code is based on newer .NET you have to update to a new version each year

    .NET has a refreshingly sane release life cycle, similar to Node.js:

    - There's a new major release every year (in November)

    - Even numbers are LTS releases, and get 3 years of support/patches

    - Odd numbers get 18 months of support/patches

    This means if you target LTS, you have 2 years of support before the next LTS, and a full year of overlap where both are supported. If you upgrade every release, you have at least 6 months of overlap.
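
    To put rough numbers on that overlap, here's a back-of-the-envelope sketch in Python (dates are approximate, using .NET 8 and .NET 10 as the LTS pair):

        from datetime import date

        # An LTS ships each November and gets 3 years of patches;
        # the next LTS ships 2 years after it.
        lts_release = date(2023, 11, 1)          # e.g. .NET 8
        lts_end_of_support = date(2026, 11, 1)   # release + 3 years
        next_lts_release = date(2025, 11, 1)     # e.g. .NET 10

        overlap = lts_end_of_support - next_lts_release
        print(overlap.days)  # 365: a full year where both LTS releases are patched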

    There are very few breaking changes between releases anyway, and they're often in infrastructure (config, startup, project structure) rather than actual application code.

  3. It's good to move to Zigbee/Thread/Z-Wave anyway, because they're all better protocols for smart-home stuff. Plus, Wi-Fi means you might be buying stuff that relies on the cloud, which is a non-starter for anyone who doesn't like buying future paperweights.

    But your criticisms are strange. You have more than 254 devices connecting (which implies a complex setup) but can't increase the subnet size? Or does your router just have an absurdly small default DHCP range?
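
    The subnet math is quick to check, e.g. with Python's ipaddress module (the 192.168.x.x ranges here are just examples):

        import ipaddress

        # A /24 tops out at 254 usable hosts (256 minus network and broadcast)...
        print(ipaddress.ip_network("192.168.1.0/24").num_addresses - 2)  # 254

        # ...while widening the mask by one bit doubles the address space.
        print(ipaddress.ip_network("192.168.0.0/23").num_addresses - 2)  # 510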

    I also don't understand the swap-your-router problem, unless you're using the default SSID and never changing it. Configure the SSID and PSK to be the same as before and everything will just work.

  4. > there’s not good support for very low power devices that use WiFi

    That's why we have Thread. Wi-Fi just isn't an efficient protocol to use with deep sleep: the radio takes more power to run, the overhead of connecting is higher, and the device needs a full IP stack. Even with power-save mode (if supported by both client and AP), the radio is on for hundreds of milliseconds to send a message.

    Thread has a "sleepy end device" profile built in, where the hub queues messages and expects the device to be in deep sleep most of the time. And since there's far less overhead, the radio only has to be on for tens of milliseconds.

  5. When I first saw the iPhone, I remember thinking how silly it was that a device calling itself a "phone" had the phone function as just one of many apps. Other phones had internet and other features, sure, but their "home" screen, so to speak, was a phone UI. You had to hit "Menu" or something else to see the other apps, which were clearly secondary to the primary phone function.

    The iPhone felt more like a general portable computing device that happened to also function as a phone.

    Even the BlackBerry up to that point still felt primarily like an "email/phone device" (though, funny enough, I never had a BlackBerry myself until after the iPhone came out).

    The irony now, and I suspect many people are like this, is that my "phone" is barely ever used as an actual phone. It's a computer with a data plan. I'm way more likely to use some kind of internet-based voice/video chat than make or take a phone call.

    My phone icon is still on my home screen, but only because it is something I want to be able to get at quickly in an emergency. I'm certain it's the least-used icon on the screen, though.

  6. It sounds like you think I'm victim-blaming here and that's not my intent at all.

    Part of being in business is anticipating risks and having a plan -- which could be deciding to accept the risk. What sucks is that you're implicitly accepting the risk of anything you didn't think of, even if the seller is quite aware of it or even counting on it. It's a harsh lesson when something like this happens.

    Slack are leveraging their position and it makes them assholes (or capitalists, I suppose, depending on your point of view), but you can't control what they do. You can only control your choices.

  7. > refuse to let them export it

    Honestly, it's hard to feel too bad for people who chose to use this stuff without considering an escape plan or safety net and then got burned by it.

    You chose not to get fire insurance on your house, and your house burned down... like, yeah, that sucks, and I do genuinely feel bad that happened to you. But you took a risk, presumably to save money, and it bit you in the ass, and now you unfortunately have to pay the price.

    Sometimes SaaS really does make the most sense. Having your people doing part-time, non-core operations of an important service they are not experts in can be a huge distraction (and this is a hard thing for us tech people to admit!).

    But you need to go into SaaS thinking about how you'd get out: maybe that's data export, maybe it's solid contracts. If they don't offer this or you can't afford it... well, don't use it. Or take the risk and just pray your house doesn't burn down.

  8. > for example, I've seen N+1 problem sneak past because at the time the performance was good enough but once there was enough data it crumbled

    This is a super-common problem I've had to help with many times. It generally happens with teams working on single-tenant (or single-database-per-tenant) products, and it basically always comes back to the dev team working against a database with hundreds of things in it, while a handful of customers have 10,000+ things, which is when it starts getting slow.

    You acquire a bunch of small customers and everything is great, and it can be months later before you really start to see the performance impact. Often with these types of problems, things just gradually slow down until the system finally hits a wall where the DB can no longer compensate and performance suddenly tanks. "It must be caused by something we just added, but it's weird, because this problem is in a totally different part of the system."
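
    For anyone who hasn't hit it, here's a minimal sketch of the N+1 pattern in Python with sqlite3 (the table and column names are made up for illustration):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
        """)

        # N+1: one query for the orders, then one more query per order.
        # Unnoticeable with hundreds of rows; 10,001 round trips at 10,000 orders.
        orders = conn.execute("SELECT id, customer_id FROM orders").fetchall()
        for _order_id, customer_id in orders:
            conn.execute("SELECT name FROM customers WHERE id = ?",
                         (customer_id,)).fetchone()

        # The fix is a single joined query (or eager loading, in ORM terms).
        rows = conn.execute("""
            SELECT o.id, c.name
            FROM orders o JOIN customers c ON c.id = o.customer_id
        """).fetchall()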

  9. Am I the only one struggling to decipher this?

    I thought web3 was supposed to be some kind of decentralized compute, where rather than run on your own hardware or IaaS/PaaS you could make use of compute resources that vary wildly day-to-day in availability, performance, and cost, because they were somehow also mining rigs or something? But it's "decentralized" because there's not one entity running the thing.

    There's no mention of any of that in the article.

    Is it actually supposed to just be microtransactions paid with cryptocurrency? Where's the "decentralized" part of that?

    Anyway, as best I can tell, the article seems to be talking about how it turns out people aren't using blockchain for buying things, and it draws the (apparently) shocking conclusion: "the one thing people always wanted: money that just works."

  10. And you're used to the weight of the glass, which you instantly recognize when you pick it up. If it was a different weight than you were expecting, you'd probably slow down and be more deliberate.

    If you were to do the exact same robotic "throw" action with a glass of unexpected weight, you'd maybe not throw hard enough and miss, or throw too hard and possibly break it.

  11. > It's so shitty and slow because it's a bloatware

    Bloatware is unwanted software, usually pre-installed or otherwise not installed by the user, that slows down your computer and takes up space.

    So if a user wants Office, it is, by definition, not bloatware.

    Even if we do consider it bloatware -- pre-installed, unwanted by the user, and using up system resources -- that isn't an explanation of why Office itself is slow.

  12. > As the neighborhood becomes denser, rents might go down, but land values will go up, since a given piece of land can now be used to build a more profitable apartment building.

    You're talking about the wholesale destruction of neighborhoods, as they are, to be replaced with something else. The land value might go up, but the existing house value goes down, especially halfway through such a project. It's one thing if you're replacing low-density multi-tenant buildings with higher-density ones, but it's a totally different story if you're replacing detached, single-family homes. Usually the latter only happens after a decade or three of neglect has left the homes low-value.

    Higher density brings pluses and minuses: more businesses, transit, and other services, but also more traffic, crime, noise, etc.

  13. Yeah, I was afraid for a second there. I have a few Wiz bulbs and was hoping that ecosystem wouldn't suddenly die.
  14. Yep, agreed, and IMHO in a lot of cases terraform, as-is, wouldn't be viable as a closed-source product. At the very least, I would have gotten frustrated and ditched it.

    I haven't personally found a real bug in terraform or a provider yet, but I've had to refer to the source many times to figure out what is actually happening. It's always been either misuse on my part, or drift that the provider couldn't resolve.

    I still consider it a failure, though, if it takes looking at source code to figure out what's actually going on -- whether it's vendor code or in-house. The ironic and annoying part is that it usually takes a deeper level of knowledge to write better error messages, but the people with that knowledge don't perceive the problem. I fight this battle internally with my own teams all the time. The problem is not getting people to make a change, but getting them to recognize that the message is misleading/confusing/unclear to their users (eg: developers who are not domain experts like them) in the first place.

  15. Having debugged this sort of thing before, I can say it's actually really hard to figure that out.

    The entire stack is kind of bad at both logging and having understandable error messages.

    You get things like this:

        ╷
        │ Error: googleapi: Error 400: The request has errors, badRequest
        │
        │   with google_cloudfunctions_function.function,
        │   on main.tf line 46, in resource "google_cloudfunctions_function" "function":
        │   46: resource "google_cloudfunctions_function" "function" {
        ╵

    Is this a problem with my actual terraform code, or with how a variable got passed in? Is it a problem in the google provider? Is it a problem with the API itself? Or did I, as the writer of this, simply forget a field?

    In complex setups, this will be deep inside a module inside a module, and as a developer who never used google_cloudfunctions_function directly, you're left wondering what the heck is going on.

  16. Good point, but it's still a very useful way to ensure it doesn't get swapped out underneath you.

    Transitive dependencies are still a problem though. You kind of fall back to needing a lock file or specifying everything explicitly.

  17. > Where you end up in this spectrum is a matter of cost benefit. Nothing else. And that calculation always changes.

    This is where I see things too. When you start out, all your value comes from working on your core problem.

    eg: You'd be crazy to start a CRM software business by building your own physical datacenter. It makes sense to use a PaaS that abstracts as much away as possible for you so you can focus on the actual thing that generates value.

    As you grow, the high-abstraction PaaS gets increasingly expensive, and at some point that cost bubbles up to being the most valuable thing to work on. This typically means moving down a layer or two. Then you go back to improving your actual software.

    You go through this a bunch of times, and over time grow teams dedicated to this work. Given enough time and continuous growth, it should eventually make sense to run your own data centers, or even build your own silicon, but of course very few companies get to that level. Instead, most settle somewhere in the vast middle of the spectrum, with a mix of different services/components all done at different levels of abstraction.

  18. I echo this. No pull requests is awful. The only time it's worked well for me is with 2 or 3 people sitting next to each other, with the same mindset and coding style.

    Every other time I've seen or worked with teams doing it, their code is, well, bad. It "works", but it's full of stuff half-done, "we'll clean that up later" - except it's been there for 3 years. And I'm looking at it because I've narrowed down a production problem I was called in to debug, and it turns out to be crappy error handling with terrible logging that misled everyone about what was going on. A proper PR review should have flagged that and asked for something slightly better than logging "something went wrong" in a try..catch statement that spans many hundreds of lines of code.
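
    A sketch of that anti-pattern versus what a review should push for (hypothetical function names, Python for brevity):

        import logging

        log = logging.getLogger(__name__)

        def load_order(order_id): ...      # stand-ins for the hundreds of
        def charge_customer(order): ...    # lines the real code had

        # The anti-pattern: one try..catch around everything, logging nothing useful.
        def process(order_id):
            try:
                order = load_order(order_id)
                charge_customer(order)
                # ...hundreds more lines...
            except Exception:
                log.error("something went wrong")  # no step, no context, no traceback

        # Better: narrow scope, context in the message, and the traceback preserved.
        def process_better(order_id):
            order = load_order(order_id)
            try:
                charge_customer(order)
            except Exception:
                log.exception("charge failed for order %s", order_id)
                raise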

    Small, focused PRs are good: easy to review, code gets merged fast, conflicts are minimized. Massive PRs are bad: they're hard to review (problems get missed) and slow to get approved, and if they get reverted because of a problem, it's a mess to fix. PRs that do multiple separate things (fix two unrelated bugs, add a feature, and reformat spacing in 30 files) are impossible to review properly.

    If PRs are small and focused, how long the branch is open, the number of commits, and the actual branching model don't matter.

    Long-lived branches are a pain for the author (they're the one who has to merge main and resolve conflicts), but that's their choice.

  19. 5% off your next lunch and 5% off your next car are very much not the same thing.
  20. Yeah, at the core of it I see this as a value problem.

    If you charge a lot, it's a tough or impossible sell for users who aren't yet sure they'll get that amount of value from it. If you charge too little, you're leaving money on the table from big customers who would be willing to pay much more.

    Setting the up-front cost also requires you to estimate a bunch of things: what will it cost you to build (including future time to get to "feature complete" for this version), how many do you think you're going to sell, and how much time is each customer going to take up? In other words: this is what you value your time at, but you can't know most of the numbers ahead of time. You need this for subscriptions too, but there's a bit more latitude to change, and you don't necessarily have any obligations should you decide to just stop at the end of the next billing cycle.

    Longer term, there's also an incentive problem for you as the vendor. If you're very successful and saturate your market, why build new versions? Your incentive switches to making a "major" version with huge upgrades, which has a whole ton of downsides (which smart customers see, or learn the hard way). It's riskier than frequent, small releases: your first real testing and feedback comes only after a ton of massive changes. It incentivizes change for the sake of change (so you can justify a "major" version) as opposed to real improvements. Even fixing bugs becomes purely a cost; the only real incentives are pride/reputation and the hope that customers will buy the next major version.

    Subscriptions help even this out, and tying the cost to some usage metric can make the cost reflect the value even more, even as the usage changes over time (eg: the customer grows).

    The worst thing with subscriptions is when the cost doesn't reflect the value. If, as a user, you pay $20/month for something that enables you to make $2,000/month, that's a no-brainer. When you pay $20/month for something that's useful 4 or 5 times a year, or when it's really hard to figure out what value, if any, you're getting for your money, that's when it becomes a problem.
