
nicois
137 karma

  1. A large percentage of git users are unaware of git-absorb (https://github.com/tummychow/git-absorb). This complements just about any git flow, vastly reducing the pain of realising you want to amend your staged changes into multiple commits. This sits well alongside many TUIs and other tools, most of which do not offer any similar capability.
  2. This is exactly what I want when baking bread. I have a fixed sequence of steps, spaced quite far apart, and this suits them perfectly: a series of relatively short breaks while autolysing and kneading, then a 10-hour wait overnight, then a 75-minute wait after proofing.

    I'm not sure how well this will work on a mobile; the service worker might be stopped after a few hours, particularly with the screen off overnight.

  3. This would be more impactful if we could see that the cost to US purchasers was actually 39% higher. Sadly, some manufacturers spread the cost across all consumers, which means non-US customers are also paying some of the tariff costs.
  4. Is there a risk that this will underemphasise some values when the sources of error are not independent? For example, the ROI on financial instruments may be inversely correlated with the risk of losing your job. If you associate an error with each, then combine them in a way which loses this relationship, there will be problems.
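    A tiny Monte Carlo sketch of why this matters (all numbers hypothetical: unit-variance errors with a correlation of -0.8). Combining the errors as if independent overstates the spread of the sum badly:

```python
import random
import statistics

def correlated_pair(rho, n, seed=0):
    """Draw n pairs of standard normals with correlation rho."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        a, b = rng.gauss(0, 1), rng.gauss(0, 1)
        x = a
        y = rho * a + (1 - rho ** 2) ** 0.5 * b
        pairs.append((x, y))
    return pairs

# Two error sources with a strong inverse relationship, as in the
# ROI-vs-job-loss example.
pairs = correlated_pair(rho=-0.8, n=20000)
combined = [x + y for x, y in pairs]

# Treating the errors as independent predicts var(x+y) = 1 + 1 = 2,
# but the inverse correlation makes the true variance 2 + 2*rho = 0.4;
# naively combined error bars here would be five times too wide.
print(round(statistics.variance(combined), 2))
```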
  5. Nit-picking, but a high signal to noise ratio is desirable, indicating low levels of noise compared to signal, not the reverse.
  6. Despite what you get in Australia being pretty reliable, it's still too expensive to justify. My 8kW solar array is connected to a Fronius inverter, but until I find a less expensive option I can't justify adding a battery.

    A 13kWh system is over $AUD10k, and the ROI is on par with the expected lifespan of the battery.

    If sodium cells can bring the price down to $AUD100 it would indeed be a massive game changer.

  7. It would be interesting to know how effective Profile Guided Optimisation is here.
  8. I've also found htmx a great way to retain server-side rendering while minimising the recalculation cost of client changes.

    By avoiding the need to add lots of client-side logic while still getting very low-latency updates, it's given me the best of both worlds.

    The way Django's template system works also makes it so easy to render a full page initially, then expose views for subcomponents of that page with complete consistency.

    On a tangent, Django's async support is still half-baked, meaning it's not great at natively supporting long polling. Using htmx to effectively retrieve pushed content from the server is impeded a little by this, but I slotted a tiny Go app in between nginx and gunicorn and avoided async hell without running out of threads.

  9. Doesn't this model fail to account for seasonal variation in the sun's path? The optimal angle will vary across the year, whatever the latitude.

    Maybe I'm missing something, but I would use a simpler algorithm which doesn't need ML. On day 0, plug in the latitude and allow the system to traverse the range of angles, finding the optimal one at the time, i.e. the one yielding maximum power. Let it run 3-5 times during the day, then fit those points to the theoretical path of the sun across the sky. Now your system is calibrated, without needing any other input. As the seasons change, the system will always know which angle to face for optimal power.
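    A rough sketch of that calibration idea (all numbers hypothetical, and the single-sinusoid sun model is a deliberate simplification that ignores seasonal declination):

```python
import math

def sweep_for_best_angle(power_fn, t, angles=range(0, 91)):
    """Day-0 calibration step: traverse the range of tilt angles and keep
    the one yielding maximum power at time-of-day t."""
    return max(angles, key=lambda a: power_fn(a, t))

def fit_sun_path(samples, sunrise=6.0, sunset=18.0):
    """Fit measured (time, best_angle) points to a simple sinusoidal
    elevation model by grid-search least squares; returns a predictor."""
    def model(peak, t):
        return peak * math.sin(math.pi * (t - sunrise) / (sunset - sunrise))
    best_peak, best_err = None, float("inf")
    for peak in range(10, 91):
        err = sum((model(peak, t) - a) ** 2 for t, a in samples)
        if err < best_err:
            best_peak, best_err = peak, err
    return lambda t: model(best_peak, t)

# Hypothetical panel: power peaks when the tilt tracks the (unknown to the
# system) sun elevation, here a sine peaking at 60 degrees at solar noon.
def true_elevation(t):
    return 60 * math.sin(math.pi * (t - 6.0) / 12.0)

def power_fn(angle, t):
    return math.cos(math.radians(angle - true_elevation(t)))

# 3-5 sweeps during day 0, then fit; no other input needed.
samples = [(t, sweep_for_best_angle(power_fn, t)) for t in (8, 10, 12, 14, 16)]
predict = fit_sun_path(samples)
print(round(predict(12.0)))  # recovers the 60-degree noon optimum
```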

  10. Which is the cause, and which the effect? Was Apple's incorrect autocorrect causing them to go to the wrong place initially, or did the autocorrect list get built from the incorrect attempts?
  11. The inter-request integrity guarantee is nice, but you're right that on its own it seems to be something many devs don't value - or at least don't consider.

    The main driver for getting it done is mostly to lay the groundwork for the websocket autosave feature, which would be exceedingly dangerous without it.

  12. I can't speak for Channels itself, but I'm quite comfortable that my use of it is not exotic, so is less likely to hit rough edges. The pattern of use I have chosen (having each websocket endpoint subscribe to layer groups based on the model instances the form is using) means it's quite clean.

    Remember that this is not a SPA, so websockets are recreated each time a page containing models marked for dynamic updates is served. This means there should be fewer problems associated with long-lived connections, and any instability will have more limited impact.

  13. Over recent weeks there have been a few HN posts relating to Django and how it interacts with "modern" ecosystems. Over Christmas I began working on a Django extension which would help leverage some features usually seen only in JavaScript-heavy websites, without the pain of writing custom code or investing in browser-heavy automated tests.

    The "killer" features I am hoping to bring to the Django community are strong data-consistency assurances between requests when editing data, realtime server-driven validation of form inputs, and optional realtime "autosaving" of fields as changes are made. You can see how this looks here: https://user-images.githubusercontent.com/236562/153730557-a...

    I have also attempted to minimise how much an existing codebase needs to be altered to use or test this. No database schema changes are required, and mostly it's just a matter of changing import statements from "django..." to "nango..."

    This is very much a proof-of-concept at the moment and certainly not fit for production, and I welcome all suggestions and critiques. A quickstart script is included in the repo which should minimise the pain in running the code locally.

  14. This is also my pattern. To further assist with this, I wrote a short(ish) rebase script intended to be run when you want to squash your series of commits, also bringing your local branch up-to-date with the upstream. It relies on your initial commit in your feature branch having a commit message which corresponds to the branch name, but that's it. This does a great job of minimising unnecessary merge conflicts, even after working offline for an extended period.

    https://gist.github.com/nicois/e7f90dce7031993afd4677dfb7e84...
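    The gist itself is truncated above, so this is not the actual script; it's only a sketch of the one convention the script relies on, i.e. locating the feature branch's initial commit by matching its commit message against the branch name:

```python
def find_feature_base(log_lines, branch):
    """log_lines: output of `git log --format=%H%x09%s`, newest first.
    Returns the SHA of the initial feature-branch commit, identified by
    the convention above: its commit message matches the branch name.
    Returns None if the convention wasn't followed."""
    base = None
    for line in log_lines:
        sha, _, subject = line.partition("\t")
        if subject.strip() == branch:
            base = sha  # keep scanning; the oldest match is the true base
    return base

log = [
    "c3\tfix tests",
    "c2\twip",
    "c1\tmy-feature",
    "c0\tunrelated upstream work",
]
print(find_feature_base(log, "my-feature"))  # -> c1
# A squash-and-update script would then collapse c1..HEAD into one commit
# and replay it onto the upstream, e.g. via `git rebase --onto`.
```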

  15. Hopefully ECS/Fargate will also be supported soon. I tried shifting our CI workers to ARM, but it meant they could no longer build our ECS images, which was not great.
  16. So this is a big step forward in terms of avoiding the race condition where CI runners would accept new jobs during scale-in operations. But how do you ensure you only spawn new ephemeral runners as jobs become available? The webhook provides part of the answer, but do we need to use something like redis to ensure exactly one runner per queued job is started?
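    One answer is an atomic "claim" per job id when the webhook fires; here is a minimal sketch, with an in-memory set standing in for a redis SET NX (with TTL) so it can run standalone. All names are hypothetical:

```python
class RunnerLauncher:
    """Sketch of the 'exactly one ephemeral runner per queued job' idea.
    A real deployment would replace the in-memory set with redis SET NX EX,
    so that concurrent webhook deliveries race safely across processes."""

    def __init__(self, spawn):
        self._spawn = spawn          # callable that actually starts a runner
        self._claimed = set()        # stand-in for redis keys with a TTL

    def on_workflow_job_event(self, job_id, action):
        if action != "queued":
            return False             # only queued jobs need a runner
        if job_id in self._claimed:  # SETNX failed: another delivery won
            return False
        self._claimed.add(job_id)    # SETNX succeeded: we own this job
        self._spawn(job_id)
        return True

spawned = []
launcher = RunnerLauncher(spawn=spawned.append)
launcher.on_workflow_job_event("job-1", "queued")
launcher.on_workflow_job_event("job-1", "queued")    # duplicate delivery, ignored
launcher.on_workflow_job_event("job-2", "completed") # not a queued event
print(spawned)  # -> ['job-1']
```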
  17. We have recently switched to GitHub actions and in addition to the above, there are two others which impact us:

    Sometimes the checkout action fetches the wrong commit! The SHA in the GitHub environment variable says one commit, but the actual code is different(!). Because we don't know why this happens we basically need to do an unshallow fetch of the whole repo to be sure we have what we expect.

    Using autoscaling self-hosted runners, it is not currently possible to instruct the agent to finish its current job but accept none after it. This is essential for avoiding broken workflows while scaling in. GitLab supports this via a signal, but GitHub has no equivalent.

  18. Is it possible you haven't added bounding boxes for your geometries? The GiST index will use the bounding box to optimise the queries, but only if it finds one. Also make sure your query is actually using the index.
  19. The redis/memcache example doesn't make a lot of sense to me, unless the idea is that a separate memcache instance is deployed alongside each backend, while redis would have been a single instance.

    I'm all for boring technology; reimplementing web protocols and semantics in JS is a disaster - and would probably have made a clearer case study than comparing to memory-first database caches.

  20. This is particularly frustrating to witness in Australia, where we have all but eliminated covid-19, but there is no political will to eliminate it entirely. Had we closed hairdressers, building sites and takeaway restaurants a month ago, we would be looking at zero daily infections.

    While many places in the world cannot yet justify the costs of elimination, a country like Australia, bordered by water, could have reached the goal before fatigue set in. As it is, I am resigned to awful subsequent waves culminating in the so-called herd immunity goal.

  21. Add in the WHO book for children to help with dealing with covid too:

    https://www.who.int/news-room/detail/09-04-2020-children-s-s...

  22. I also believe the referenced article does not correctly convey what the original New Scientist article says: by my reading, the original epidemiologist does not completely buy in to the new inferences.
  23. They claim half the population is already infected. So conduct a random sample: test 50 people and you would expect 25-ish to be positive, with most showing no symptoms. If so, then herd immunity is indeed a thing.

    I have my doubts that the numbers would support this claim. And if it were true, then virtually everyone in Spain or Italy would already be a carrier.

    The fact that cases were linked to known arrivals is also evidence against this hypothesis: if a high proportion of carriers were unwitting and asymptomatic, you would expect many of those diagnosed to have no link to someone previously diagnosed.
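    The arithmetic behind that sanity check is simple. A sketch with an exact binomial tail (the sample size is from the comment; the "only 10 positives" outcome is a hypothetical of mine):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly via math.comb."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, p = 50, 0.5       # sample 50 people; claim: half the population infected
print(round(n * p))  # expected positives under the claim -> 25

# If the random sample instead turned up only 10 positives, the claim is
# in serious trouble: under p = 0.5 that outcome is vanishingly unlikely.
print(binom_cdf(10, n, p) < 1e-4)  # -> True
```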

  24. I just contacted the developer account with the following text. In my mind this has huge potential to help the world find the right level of isolation and potentially leverage powerful adaptive learning algorithms. I don't know why this approach is not getting more mindshare in the current climate:

    Is there a way you can enable other countries to also use this app? Even if it means you can't validate the user's mobile telephone number (until that country's government negotiates a means of paying for SMS verification), allowing us to install it would let contact collection start immediately and raise awareness of this game changer.

    Personally I would allow users to opt in to pushing the collected data online, as that allows far more powerful analysis and the ability to keep refining the algorithms to infer level of risk based on multiple "hops" of contact in the preceding days. For example, carrier X overlaps with Y on day 1, then Y overlaps with Z on day 2. As soon as X is diagnosed, Y gets a high risk rating and Z gets a moderate one. The exact criteria can evolve as we learn more, and the data can be used to help train the model. The more people use it, the better it will be for everyone.

    If you want me to help I can, but either way you could save millions of lives with this app if it is adopted globally.
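    A minimal sketch of that multi-hop idea; the decay rate and hop limit are invented placeholders for whatever criteria the refined model would eventually learn:

```python
from collections import deque

def propagate_risk(contacts, diagnosed, decay=0.5, max_hops=2):
    """contacts: dict mapping each person to the set of people they
    overlapped with. Returns person -> risk score, decaying by `decay`
    with each hop out from the diagnosed carrier (breadth-first)."""
    risk = {diagnosed: 1.0}
    frontier = deque([(diagnosed, 0)])
    while frontier:
        person, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for other in contacts.get(person, ()):
            score = risk[person] * decay
            if score > risk.get(other, 0.0):
                risk[other] = score
                frontier.append((other, hops + 1))
    return risk

# The comment's example: X overlaps Y on day 1, Y overlaps Z on day 2.
# When X is diagnosed, Y gets a high rating and Z a moderate one.
contacts = {"X": {"Y"}, "Y": {"X", "Z"}, "Z": {"Y"}}
print(propagate_risk(contacts, "X"))  # Y: 0.5 (high), Z: 0.25 (moderate)
```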

  25. I suggested something similar but based on using Bluetooth and running an app locally. This would be more sensitive to proximity while potentially giving away slightly less data about exactly where you are, and easier to opt out of, either temporarily (while at home say) or completely.

    There could also be the option of logging nearby Bluetooth addresses locally only and looking up an online database of infected owners, or submitting collected data online to allow aggregation and preemptive notification of potential exposure before symptoms show.

This user hasn’t submitted anything.
