radus (845 karma)
- Polars has a much more consistent API; give it a shot.
Regarding your plotting question: use seaborn when you can, but you’ll still need to know matplotlib.
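For the plotting point, here is a minimal sketch of how the two layers interact: seaborn draws the plot, but the fine-grained tweaks still go through matplotlib. It assumes seaborn's bundled `tips` example dataset (fetched over the network on first use); the labels are my own.

```python
import matplotlib.pyplot as plt
import seaborn as sns

# seaborn handles the high-level plot and returns a matplotlib Axes
tips = sns.load_dataset("tips")  # seaborn's bundled example dataset
ax = sns.scatterplot(data=tips, x="total_bill", y="tip")

# ...but fine-grained tweaks still go through the matplotlib API
ax.set_xlabel("Total bill (USD)")
ax.set_title("Tips vs. bill size")
plt.tight_layout()
plt.show()
```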
- > But are you really going to repair it?
Yes
- Polars and DuckDB interoperate nicely and can enable this flexibility.
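As a sketch of that interop (assuming a reasonably recent duckdb Python package, which can scan Polars DataFrames via Arrow; the data and column names are invented):

```python
import duckdb
import polars as pl

df = pl.DataFrame({
    "species": ["a", "a", "b"],
    "mass": [1.2, 3.4, 5.6],
})

# DuckDB can query a Polars DataFrame that is in scope by name,
# and .pl() materializes the result back as a Polars DataFrame
out = duckdb.sql("""
    SELECT species, avg(mass) AS mean_mass
    FROM df
    GROUP BY species
""").pl()

print(out)
```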
- I like using EFK (Elasticsearch-Fluentd-Kibana) for this.
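One common way to feed an EFK stack is to write one JSON object per log line to stdout and let Fluentd parse it. A stdlib-only sketch (the field names are my own choice, not anything EFK requires):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line for Fluentd to pick up."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("app").info("service started")
```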
- I guess then you've got to abstract one level further and formulate your advice as an allegory.
- Example: you set your local docker context to the production environment, and then when you type `docker system prune --volumes` you delete your production data.
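One way to blunt that footgun is a wrapper that checks the active context before doing anything destructive. A sketch, assuming the dangerous context is literally named `production` (`docker context show` prints the active context name on recent Docker CLIs):

```python
import subprocess
import sys

def active_docker_context() -> str:
    # `docker context show` prints the name of the currently selected context
    result = subprocess.run(
        ["docker", "context", "show"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    # Refuse to prune anything while pointed at production
    if active_docker_context() == "production":
        sys.exit("active docker context is 'production'; refusing to prune")
    subprocess.run(["docker", "system", "prune", "--volumes", "--force"], check=True)
```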
- The answer is “it depends”
- Quick critique: the module contains functions with many parameters, many branches, deep nesting, and multiple return points.
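To make that concrete, here is the shape being criticized and one common remedy: a parameter object plus guard clauses (which trade returns buried in deep branches for early returns at the top). Every name here is invented for illustration:

```python
from dataclasses import dataclass

# Before (shape only): many parameters, deep nesting, returns scattered in branches
def ship(order, user, address, carrier, priority, insure, gift, retries):
    if order is not None:
        if user.is_active:
            if address.is_valid():
                ...  # the real work, buried four levels deep
                return True
            else:
                return False
        else:
            return False
    return False

# After: bundle related parameters and flatten the nesting with guard clauses
@dataclass
class ShippingRequest:
    order: object
    user: object
    address: object
    carrier: str = "default"
    priority: bool = False

def ship_clean(req: ShippingRequest) -> bool:
    if req.order is None:
        return False
    if not req.user.is_active:
        return False
    if not req.address.is_valid():
        return False
    ...  # the real work, at one level of nesting
    return True
```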
- Docker Swarm is also a decent solution if you do need to distribute some workloads while still using a Docker Compose file, with a few extra tweaks. I use this to distribute compute-intensive jobs across a few servers, and at this scale it pretty much just works. The sharp edges I've come across relate to differences between the Compose file versions supported by Compose and Swarm: Swarm continues to use Compose file version 3, which was used by Compose V1 [1].
- They were defending rocks. They had not uncovered evidence of native Martian life.
See more discussion here: https://scifi.stackexchange.com/questions/160959/is-or-was-t...
- 1) https://github.com/radusuciu/snakemake-executor-plugin-aws-b... (my fork). Just add the features to the Batch job-building code.
2) https://github.com/radusuciu/snakemake-executor-plugin-aws-b... This one is more experimental and not yet fully working; I wanted to try a few things: a) can we rely on existing job definitions (managed through IaC instead)? b) can we implement a fire-and-forget model where the main Snakemake process runs on Batch as well? c) can we slim down the Snakemake container by stripping off unnecessary features?