- What this is trying to say:
  - "peers": participants in the network are peers, i.e. both ends of a connection run the same code, in contrast to a client-and-server architecture, where the two sides often run quite different code. To exemplify: the code GitHub's servers run is very different from the code your IDE with Git integration runs.
  - "replicated across peers": the Git objects in the repository, and "social artifacts" like discussions in issues and revisions in patches, are copied to other peers. This copy is kept up to date by doing Git fetches for you in the background.
  - "in a decentralized manner": every peer/node in the network gets to decide locally which repositories it intends to replicate, i.e. you can talk to your friends and replicate their cool projects. And when you first initialize a repository, you can decide to make it public (which allows everyone to replicate it) or private (which allows only a select list of nodes, identified by their public keys, to replicate it). There is no centralized authority that can tell you which repositories to replicate or not.
I do realize that we're trying to pack quite a bit of information into this sentence/tagline. I think it's reasonably well phrased, but for the uninitiated it might require some "unpacking" on their end.
If we "lost you" on that tagline, and my explanation or hungariantoast's (which is correct as well) helped you understand, I would appreciate it if you could criticize more constructively and suggest a better way to introduce these features in a similarly dense tagline, or say what else you think would be a meaningful but short explanation of the project. If you don't care to do that, that's okay, but Radicle won't be able to improve based solely on "you lost me there".
In case you actually understood the sentence just fine and we "lost you" for some other reason, I would appreciate if you could elaborate on the reason.
- Please check out radicle.dev, helping hands are always welcome!
- Check the Wayback Machine. It has archived some of the previous iterations: https://web.archive.org/web/20250000000000*/radicle.xyz
- Comparing to index funds is neat. What Flattr.com offered was to set a monthly budget and then allocate it not according to an index but according to your preferences (value per flattr = monthly budget divided by number of flattrs).
Flattr is no more. But I could see that working out for open source projects: allocate a fixed monetary amount per unit of time you want to donate, record "intent to donate" during that period (this could be done via a browser extension or a CLI), and distribute at the end of the period.
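The scheme above can be sketched in a few lines. Everything here is hypothetical (project names, integer-cent accounting); it only illustrates the "fixed budget, split by recorded intents" idea:

```python
from collections import Counter

def distribute(budget_cents: int, intents: list[str]) -> dict[str, int]:
    """Split a fixed per-period budget across recorded intents ("flattrs").

    Each entry in `intents` is one recorded intent for a project; a project
    recorded twice gets a double share.
    """
    if not intents:
        return {}
    counts = Counter(intents)
    per_flattr = budget_cents // len(intents)  # value of a single flattr
    return {project: n * per_flattr for project, n in counts.items()}

# Example period: a $10.00 budget and four recorded intents.
print(distribute(1000, ["eno", "radicle", "eno", "unison"]))
# {'eno': 500, 'radicle': 250, 'unison': 250}
```

A browser extension or CLI would only need to append to the intent list during the period and call something like `distribute` once at the end.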
- What I was looking for on the website, and what I think is worth more than implementing a parser in another language, is schema support. That is, you should provide something like XSD for XML, JSON Schema for JSON, TOLS for TOML.
Why? There is a need (see the enumeration above) to declaratively specify how an Eno file should look. I do not want validation to creep into my code, as it does with `document.string('author', required: true)`. This just scares the hell out of me. Say you want to parse the same Eno file in different languages: you end up replicating the validation as well, which means you end up maintaining it, or rather not maintaining it... Apply leverage by moving validation into your parser.
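To make the leverage argument concrete, here is a minimal sketch of what schema-driven validation could look like. The schema format and field names are made up for illustration and are not Eno's actual API; the point is only that one declarative schema replaces per-language validation code:

```python
# Hypothetical declarative schema: one artifact shared by every
# implementation, instead of calls like document.string('author',
# required: true) scattered through each codebase.
SCHEMA = {
    "author": {"type": str, "required": True},
    "year":   {"type": int, "required": False},
}

def validate(document: dict, schema: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    for field, rules in schema.items():
        if field not in document:
            if rules["required"]:
                errors.append(f"missing required field: {field}")
            continue
        if not isinstance(document[field], rules["type"]):
            errors.append(f"field {field}: expected {rules['type'].__name__}")
    return errors

print(validate({"year": "1999"}, SCHEMA))
# ['missing required field: author', 'field year: expected int']
```

If the parser itself consumed such a schema, every language binding would get the same validation behavior for free.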
Another thing is that it appears you are implementing the parsers by hand instead of using a parser generator that consumes a grammar for Eno. What is your reasoning behind this? Is it performance? Did you benchmark using generated parsers (maybe wrapped in a nice API)?
- That depends on (a) how you define your measure for "performance" (see other comments that talk about the difference between microbenchmarking and application benchmarking), (b) the compiler that you use, (c) the WebAssembly implementation that the code finally runs on.
So, do not expect a clear answer to that question. Probably no one really knows, because of a lack of benchmarks and of deployments in the wild that have been analyzed. That's why they made the benchmark.
I am asking because if you also have a way to cache all values, this might allow carrying some of Unison's nice properties a little further. Say I implement a compiler in Unison: I end up with an expression that has a free variable, which carries the source code of the program I am compiling.
Now, I could take the hash of the expression, the hash of the term that represents the source code (i.e., what the variable in my compiler binds to), and the hash of the output. That would be very neat for reproducibility, similar to content-addressed derivations in Nix, and extensible to distributed reproducibility like Trustix.
I guess you'll be inclined to say that this is out of scope for your caching, because your caching would only cache results of expressions where all variables are bound (at the top level, evaluating down). And you would be right. But the point is to bridge to the outside of Unison, at runtime, and make this just easy to do with Unison.
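As a rough illustration of the reproducibility idea, here is a sketch where SHA-256 stands in for Unison's own content hashing (Unison hashes terms, not byte strings, so this is an analogy, not its mechanism):

```python
import hashlib

def h(data: bytes) -> str:
    """Content address: SHA-256 hex digest (a stand-in for Unison's hashes)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical: a compiler expression with one free variable (the source).
compiler_expr = b"(compile source)"     # stands in for the compiler term
source        = b"print 42"             # what the free variable binds to
output        = b"<compiled bytecode>"  # result of applying the compiler

# Record (hash of expression, hash of input) -> hash of output, as a
# reproducibility log in the spirit of content-addressed derivations.
cache: dict[tuple[str, str], str] = {}
cache[(h(compiler_expr), h(source))] = h(output)

# Anyone re-running the same compiler on the same source can verify they
# obtained the same output hash.
assert cache[(h(compiler_expr), h(source))] == h(output)
```

The triple is what a system like Trustix could compare across builders: same expression hash and input hash should yield the same output hash everywhere.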
Feel free to just point me at material to read, I am completely new to this language and it might be obvious to you...