
RaftPeople
418 karma

  1. > including "no development branches"

    Can you explain this comment? Are you saying to develop directly in the main branch?

    How do you manage the various time scales and complexity scales of changes? Task/project length can vary from hours to years and dependencies can range from single systems to many different systems, internal and external.

  2. > They are not, by definition. You provided proof for it yourself: you mention the "body of knowledge [...] above that", so they really aren't the topmost layer

    I said "shallow", not "topmost".

    > That's why you learn the fundamentals. So you can understand the refinements and applications of them at first glance.

    Can you explain when (if ever) a person should use an OOP approach and when (if ever) he/she should use a functional approach to implement a system?

    I don't think the fundamentals listed above help answer questions like that, and those questions are exactly what the industry has not really figured out yet. We can see pros and cons to all of the different approaches, but we don't have a body of knowledge that can point to concrete evidence that one approach is preferred over the many others.

  3. Fun stuff. I built a system like this for artificial life years ago (neural network was the brain).

    I'm curious how you handled the challenges around genotype-to-phenotype mapping. For my project the neural network was fairly large and somewhat modular, since it needed to support multiple different functions (e.g. vision, hearing, touch, motor, logic+control, etc.). It felt like the problem would be too challenging to solve well (retaining the general structure of the network, and thus existing capabilities, while still allowing some variation for new ones), so I punted and had no genes.

    I just evolved each brain based on some high-level rules: the most successful creatures had a low percentage chance of any given neuron/connection/weight/activation function/etc. changing, less successful creatures had a higher chance of changes, and the absolute worst were simply re-created entirely.

    Things I noticed that I thought were interesting, wondering what things you've noticed in yours:

    1-The most successful ones frequently ended up with a chokepoint, e.g. layer 3 out of 7 having a smaller number of neurons with high connectivity to the previous layer.

    2-Binary/step activation functions ended up in successful networks much more frequently than I expected; I'm not sure why.

    3-Somewhat off topic from digit recognition, but an interesting question about ANN evolution: how do you push the process forward? What conditions in the system would cause it to find a capability that is more advanced or only indirectly tied to success? For example, with vision and object recognition: what is a valuable precursor step the system could develop first? And how do you create a generic environment where those things can evolve naturally, without trying to steer the system?
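    A minimal sketch of the evolution scheme described above, assuming a flat list of weights as the genome; all rates, names, and the Gaussian perturbation are my own illustrative choices, not the original implementation:

```python
import random

def mutate(genome, fitness_rank, population_size, base_rate=0.02, max_rate=0.5):
    """Mutation chance scales with rank: the best creature (rank 0) gets
    base_rate, the worst gets max_rate (these rates are assumptions)."""
    scale = fitness_rank / max(population_size - 1, 1)
    rate = base_rate + (max_rate - base_rate) * scale
    return [w + random.gauss(0, 0.1) if random.random() < rate else w
            for w in genome]

def evolve_step(population, fitnesses, recreate_fraction=0.1):
    """Mutate everyone by rank; fully re-create the absolute worst performers."""
    order = sorted(range(len(population)), key=lambda i: -fitnesses[i])
    n_recreate = max(1, int(len(population) * recreate_fraction))
    new_pop = [None] * len(population)
    for rank, i in enumerate(order):
        if rank >= len(population) - n_recreate:
            # worst performers: brand-new random genome
            new_pop[i] = [random.gauss(0, 1) for _ in population[i]]
        else:
            new_pop[i] = mutate(population[i], rank, len(population))
    return new_pop
```

    The key property is only the inverse relationship between fitness and mutation rate; everything else (perturbation size, re-creation fraction) would be tuning knobs.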

  4. > Are you really expecting an answer here? I'll answer anyway.

    Yes, and thanks for the examples; it's now clear what you were referring to. I agree that most of those are generally good fundamentals (e.g. wrong states, error handling, time+space), but some are already in complex territory, like mutability. Even though we can see the problem, we have a massive number of OOP systems with state all over the place, so applying a principle like that is far from settled, and it's not easy to distill into a set of rules to guide software engineers.

    > The software engineers' body of knowledge can change 52 times in a year

    > Nah, those changes are only in the surface, at the most shallow level.

    I think the types of items you listed above are the shallow layer. The body of knowledge above that, about how to implement software systems (the patterns and approaches), is enormous and growing. It's a large collection of approaches, each with strengths and weaknesses, but with no clear-cut rule for application other than significant experience.

  5. > Once again, that's only true at the surface level.

    Can you provide concrete examples of the things you think are foundational in software? I'm thinking of something beyond "be organized so it's easier for someone to understand", which applies to just about everything we do (e.g. modularity, naming, etc.).

    Every different approach (OOP, functional, relational DB, object DB, enterprise service bus + canonical documents, microservices, cloud, on-prem, etc.) is just an option with pros and cons.

    With each approach, the set of trade-offs depends on the context the approach is applied in; it's not an absolute set of trade-offs, it's relative.

    A critical skill that takes a long time to develop is to see the problem space and do a reasonably good job of identifying how the different approaches fit in with the systems and organizational context.

    Here's a real example:

    A project required a bunch of new configuration capabilities to be added to a couple of systems using the normal configuration approach found in ERP systems (e.g. flags and codes attached to entities, controlling functional flow, data resolution, etc.). But for some of them, a more flexible "if-then" type capability made sense when analyzing the kinds of situations the business would encounter in those areas. The naive/simple approach would have been possible, but it would have been fragile, and it would have been difficult to explain to the business how configurations in different places come together to produce the desired result.

    There is no simple rule you can train someone on to spot when this is the right approach and when it is not. It's heavily dependent on the business context and takes experience.

  6. > Nah, those changes are only in the surface, at the most shallow level.

    Very strongly disagree.

    There are limitless methods of solving problems with software (due to very few physical constraints) and there are an enormous number of different measures of whether it's "good" or "bad".

    It's both the blessing and curse of software.

  7. > Could give examples of use cases where dense 3d packing is needed? (Say, besides literal packing of physical objects in a box? )

    Not an answer, but something interesting on this topic:

    In a warehouse/distribution center, a dense packing result can be too time-consuming for most consumer products. As density increases, it takes the human longer to find a workable arrangement on their own. You can provide packing instructions, but that is even slower than the human just doing their best via trial and error.

    We had to dial back our settings from about 95% volume utilization (the initial naive setting) down to about 80% before workers could rapidly fill the cartons. Basically, it's balancing labor cost vs. system capacity during peak (the conveyor would start backing up) vs. shipping costs.
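    The dial-back can be sketched as a single target-utilization parameter on the cartonization step; the greedy first-fit logic and the numbers here are my own illustration, not the actual system:

```python
def fits_in_carton(item_volume, used_volume, carton_volume, target_utilization=0.80):
    """Accept an item only while the carton stays under the target fill level.
    Lowering target_utilization (e.g. 0.95 -> 0.80) trades shipping cost
    (more cartons) for pick/pack speed on the floor."""
    return used_volume + item_volume <= carton_volume * target_utilization

def pack(item_volumes, carton_volume, target_utilization=0.80):
    """Greedy first-fit: open a new carton once the current one hits target."""
    cartons, used = [[]], 0.0
    for vol in item_volumes:
        if fits_in_carton(vol, used, carton_volume, target_utilization):
            cartons[-1].append(vol)
            used += vol
        else:
            cartons.append([vol])
            used = vol
    return cartons
```

    With six 30-unit items and a 100-unit carton, an 80% target yields three cartons where a 95% target yields two: that third carton is the shipping cost you pay for faster packing.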

  8. I got the same vibe.
  9. > This individual was holding it as if it were a weapon

    When asked to clarify the communications officer provided the following evidence:

    "As you can see in this image the student is squinting a little bit, exactly what you would do if you were aiming."

    "Also, you can see by his stance that he is trying to create stability with one foot a little forward."

    "I wouldn't be surprised at all if this was actually a trial run and he is testing the AI for weaknesses."

  10. > The main difference therefore between error and warning is, "We didn't think this could happen" vs "We thought this might happen".

    What about conditions like "we absolutely knew this would happen regularly, but it prevents the completion of the entire process, which is absolutely critical to the organization"?

    The notion of an "error" is very context dependent. We usually use it to mean "cannot proceed with an action that is required for the successful completion of this task".
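    A small sketch of that context-dependent distinction, using Python's standard logging; the invoice/account scenario and all names are made up for illustration:

```python
import logging

logger = logging.getLogger("batch")

class BlockingCondition(Exception):
    """Expected to happen regularly, yet it still prevents the run from completing."""

def post_invoice(invoice, approved_accounts):
    # A missing account approval is routine, but the task cannot complete
    # without it -- so in this context it is an error, not a warning.
    if invoice["account"] not in approved_accounts:
        raise BlockingCondition(f"unapproved account {invoice['account']!r}")
    if invoice.get("memo") is None:
        # Harmless and recoverable: genuinely just a warning.
        logger.warning("invoice %s has no memo", invoice["id"])
    return True

def run_batch(invoices, approved_accounts):
    for inv in invoices:
        try:
            post_invoice(inv, approved_accounts)
        except BlockingCondition as exc:
            # "Cannot proceed with an action required for successful completion"
            logger.error("batch halted on invoice %s: %s", inv["id"], exc)
            return False
    return True
```

    The same condition ("we knew this would happen") lands at warning or error level depending only on whether it blocks the task, not on how surprised anyone is.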

  11. > I can't think of any time obvious unintended behaviour showed up not caught by the contract encoded in tests

    Unit testing, whether manual or automated, typically catches about 30% of bugs.

    End to end testing and visual inspection of code are both closer to 70% of bugs.

  12. > In the real world I would not have a shared "money library" to begin with. If there were money-related operations that needed to be used by multiple services, I would have a "money service" which exposed an API and could be deployed independently.

    Depending on what functionality the money service handles, this could become a problem.

    For example, one shared-library function I've seen in the past is rounding (making sure all of the rounding rules are handled properly based on configuration, etc.). An HTTP call for every single low-level rounding operation would quickly become a bottleneck.
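    A sketch of what such a shared rounding library might look like; the rule table and function names are hypothetical, but they show why this stays an in-process call: each invocation is microseconds, whereas the same logic behind a "money service" would cost a network round trip per rounding operation:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# Hypothetical config-driven rules: (rounding quantum, rounding mode) per currency.
ROUNDING_RULES = {
    "USD": (Decimal("0.01"), ROUND_HALF_UP),
    "JPY": (Decimal("1"), ROUND_HALF_UP),       # no minor unit
    "CHF": (Decimal("0.05"), ROUND_HALF_EVEN),  # 5-rappen cash rounding
}

def round_money(amount, currency):
    """Round an amount to the configured quantum for its currency."""
    quantum, mode = ROUNDING_RULES[currency]
    steps = (Decimal(str(amount)) / quantum).quantize(Decimal("1"), rounding=mode)
    return steps * quantum
```

    Centralizing the rule table is the point of the shared library: every caller rounds identically, with no per-call network hop.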

  13. > There were some reasonable concerns. Using tables for both layout and literal tables removes semantic meaning

    The simple solution:

    <table type="layout"> (or "data")

  14. > At the SMB scale accountants are mostly paid to coach/pester/goade the employees to hand in the necessary paperwork in time.

    The perfect job for AI.

  15. I did the exact same thing.
  16. > But nobody really teaches the distinction between two passages that happen to have an identical implementation vs two passages that represent an identical concept, so they start aggressively DRY'ing up the former even though the practice is only really suited for the latter subset of them.

    Even identical implementations can make more sense to duplicate once you throw in variables like the organizational coupling of different business groups and their change-management cycles/requirements.
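    A toy illustration of that point (the business rules and names are invented): two functions that are character-for-character identical today, but owned by different groups with different change cycles, so DRYing them into one would couple those groups:

```python
def retail_discount(total):
    """Marketing's promotion: 10% off orders over 100.
    Changes whenever Marketing reworks promotions."""
    return total * 0.9 if total > 100 else total

def wholesale_discount(total):
    """Sales' volume pricing: currently also 10% off over 100,
    but it evolves on Sales' own change-management cycle.
    Merging it with retail_discount would mean every Sales
    change needs Marketing's sign-off, and vice versa."""
    return total * 0.9 if total > 100 else total
```

    The implementations coincide; the concepts don't, and the cost of the coupling shows up only later, when one group needs to change "its" copy.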

  17. > In software, optimizing for speed works best in cases where architecture has minimal relevance for product outcomes.

    The other consideration is the impact of low quality on the business.

    Generally, I find that the cost of cleaning up issues in production systems (e.g. transactions all computed incorrectly and flowed, incorrectly, to 9 downstream systems) far outweighs the time it takes to get it right.

    Even if the issue doesn't involve fixing data all over the place and just involves creating a manual work around, that can still be a huge issue that requires business people and systems people to work out an alternate process that correctly achieves the result and gets the systems into the correct state.

    The approach I've seen that seems to work is to reduce scope and never reduce quality. You can still get stuff done rapidly and learn about what functions well for the business and what doesn't, but anything you commit to should work as expected in production.

  18. > And boy to the people making the decisions NOT want to hear that.

    You are 100% correct. The way I've tried to manage that is to provide the information while not appearing to be the naysayer, by giving some options. It makes it seem like I'm on board with the crazy-ass plan and just trying to find a way to make it successful, like this:

    "Ok, there are a few ways we could handle this:

    Option 1 is to do ABC first, which will take X amount of time and gets you some value soon; then come back and do DEF later.

    Option 2 is to do ABC+DEF at the same time, but it's much tougher and slower."

  19. > While hardware folks study and learn from the successes and failures of past hardware, software folks do not

    I've been managing, designing, building and implementing ERP type software for a long time and in my opinion the issue is typically not the software or tools.

    The primary issue I see is a lack of qualified people managing large/complex projects, because it's a rare skill. Being successful requires lots of experience and the right personality (i.e. low ego: not a person who just enjoys being in charge, but rather a problem solver who is constantly seeking a better understanding).

    People without the proper experience won't see the landscape in front of them. They will see a nice little walking trail over some hilly terrain that extends for a few miles.

    In reality, it's more like the Fellowship of the Ring trying to make it to Mount Doom, but that realization happens slowly.

  20. > the main character is incapable of dealing with things inside him that he doesn't understand

    Exactly.

    To get to the point where he can really believe that the abuse was "not his fault" requires time and effort. If the therapist had just told him that day 1 it would not have had the same effect.

