Not only interoperability but backups, tooling, distributed workflows, and everything in between would work the same consistent way.
That said, I cannot count the times this concept has been brought up and attempts made to make it work, but despite how much I love the idea in theory, I have yet to see a way it could work in practice.
Some of the issues:
- No universal agreement on the exact schema, feature set, and workflows. Do competing implementations break each other? And if it's not interoperable, why even bother versus just using an external solution?
- How to handle issues that aren't associated with one specific repo, or that span multiple repos, or what happens when repos are split, etc.
- How to avoid confusing devs who stumble on issue branches, or wherever the actual data is stored in the repo.
- How best to make this usable for non-devs.
The list goes on.
That's the interesting question. Normally a bug tracker would basically be a SQL application. When you move it into a Git repo you lose that, and now you have to think about how to represent all that relational data in your repository. It gets annoying. This is why for Fossil it's such a trivial thing to do: Fossil repositories _are_ relational and hosted on an RDBMS (SQLite3 or PG). Without SQL, referential integrity is easy to break (e.g., issues that refer to other issues that don't know they're being referred to), and querying your issue database becomes a problem as it grows, because Git doesn't really have an appropriate index for this.
What one might do to alleviate the relational issues is to not try to maintain referential integrity at all, but instead suck the issues from Git into a local SQLite3 DB. Then, as long as there are no non-fast-forward pushes to the issues branch, it's always easy to catch up and keep a functional relational database.
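A minimal sketch of that catch-up logic, assuming a hypothetical layout where each issue is a JSON file (with `status` and `title` fields) on a branch named `issues`; the branch name, file format, and schema are all assumptions for illustration, not any existing tool's:

```python
import json
import sqlite3
import subprocess

def git(*args: str) -> str:
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

def sync(db_path: str = "issues.db", ref: str = "issues") -> None:
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS meta (last_commit TEXT)")
    db.execute("CREATE TABLE IF NOT EXISTS issue "
               "(path TEXT PRIMARY KEY, status TEXT, title TEXT)")
    head = git("rev-parse", ref).strip()
    row = db.execute("SELECT last_commit FROM meta").fetchone()
    if row is None:
        # First sync: read every issue file on the branch.
        paths = git("ls-tree", "-r", "--name-only", ref).splitlines()
    elif row[0] == head:
        return  # already caught up
    else:
        # Fast-forward catch-up: only re-read files touched since last sync.
        paths = git("diff", "--name-only", row[0], head).splitlines()
    for path in paths:
        try:
            data = json.loads(git("show", f"{ref}:{path}"))
        except subprocess.CalledProcessError:
            db.execute("DELETE FROM issue WHERE path = ?", (path,))
            continue  # the file was deleted on the branch
        db.execute("INSERT OR REPLACE INTO issue VALUES (?, ?, ?)",
                   (path, data["status"], data["title"]))
    db.execute("DELETE FROM meta")
    db.execute("INSERT INTO meta VALUES (?)", (head,))
    db.commit()
```

The only state kept outside Git is the last-synced commit, which is what makes catching up cheap when the branch only ever moves forward.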
1. Fossil repositories are explicitly not relational; they are, however, stored in SQLite databases. The data model for everything SCM-relevant (which also includes all content such as tickets, wiki, and forum posts) is stored as artifacts in a blob table (plus a delta table), and artifacts reference other artifacts by hash value; that is what provides the referential integrity (that, and the code that handles it). There are relations (via auxiliary tables) to speed up queries, but those tables are transient, are updated as new artifacts are inserted, and can be regenerated from the artifacts at any time.
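A toy illustration of that split in Python with sqlite3 (to be clear: this is not Fossil's actual schema or artifact format, just the principle): the hash-addressed artifact table is canonical, and the relational index is derived and regenerable.

```python
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
# Canonical store: content-addressed artifacts that reference each other
# by hash; the hashes, not foreign keys, provide referential integrity.
db.execute("CREATE TABLE artifact (uuid TEXT PRIMARY KEY, content BLOB)")
# Transient index for fast queries: derived from the artifacts, safe to
# drop and regenerate at any time.
db.execute("CREATE TABLE ticket_idx (uuid TEXT, status TEXT)")

def insert_artifact(content: bytes) -> str:
    uuid = hashlib.sha256(content).hexdigest()
    db.execute("INSERT OR IGNORE INTO artifact VALUES (?, ?)",
               (uuid, content))
    return uuid

def rebuild_index() -> None:
    db.execute("DELETE FROM ticket_idx")
    for uuid, content in db.execute("SELECT uuid, content FROM artifact"):
        for line in content.decode().splitlines():
            if line.startswith("status:"):  # toy line-based format
                db.execute("INSERT INTO ticket_idx VALUES (?, ?)",
                           (uuid, line.split(":", 1)[1]))

first = insert_artifact(b"status:open")
insert_artifact(b"status:closed\nsupersedes:" + first.encode())
rebuild_index()
```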
(Users and their metadata, and configuration, are not part of this scheme, so those tables might be viewed as relational tables. They are local-only and not synced.)
See https://fossil-scm.org/home/doc/trunk/www/fossil-is-not-rela... and https://fossil-scm.org/home/doc/tip/www/theory1.wiki for more details.
2. There are no other databases like PostgreSQL to choose from.
I thought Fossil supported, or at least aimed to support, different SQL RDBMSes. Did that go away at some point?
That being said, it's the only existing implementation and no others are planned. If you'd like to port it to PostgreSQL, go for it, but it would probably lose a lot of the appeal of being a distributed version control system.
> just not try to maintain referential integrity but instead suck up the issues from Git into a local SQLite3 DB
Applications with SQL backends tackle the problem "how can we express our CRUD data structures so they fit the relational model?"
When you replace SQL with a file-based serialised format, you mainly lose ACID. The serialisation problem itself is arguably easier than mapping your data onto the relational model.
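For example, the kind of record that would be normalized into several tables in SQL serialises naturally into one file (the layout and field names below are made up for illustration):

```python
import json
from pathlib import Path

# The same record that would need issue, comment, and label tables
# (plus joins) in SQL nests naturally into a single file.
issue = {
    "id": "42",
    "title": "Crash on empty input",
    "status": "open",
    "labels": ["bug", "parser"],
    "comments": [
        {"author": "alice", "ts": "2024-05-01T10:00:00Z",
         "body": "Can you attach a repro?"},
    ],
}
Path("issues").mkdir(exist_ok=True)
Path("issues/42.json").write_text(json.dumps(issue, indent=2))
```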
You are a single user locally: as long as your IDE plugin doesn't fight with your terminal client, your need for ACID is low. You handle conflicts like Git conflicts, but you can apply domain-specific conflict resolution: while conflicts in source code have no general resolution strategy, merging issue comments can be easy, a status change that people agree on can be resolved automatically, and manual conflict handling can be tool-assisted.
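A sketch of such domain-specific resolution, as a three-way merge over the toy issue format above (the field names and strategy are assumptions, not any real tool's):

```python
def merge_issue(base: dict, ours: dict, theirs: dict) -> dict:
    merged = dict(ours)
    # Comments are effectively append-only: take the union of both
    # sides, ordered by timestamp.
    seen = {(c["author"], c["ts"]) for c in ours["comments"]}
    merged["comments"] = sorted(
        ours["comments"] + [c for c in theirs["comments"]
                            if (c["author"], c["ts"]) not in seen],
        key=lambda c: c["ts"])
    # Status: a change on only one side, or the same change on both,
    # resolves automatically; a real disagreement goes to a human.
    if ours["status"] == base["status"]:
        merged["status"] = theirs["status"]
    elif theirs["status"] not in (base["status"], ours["status"]):
        raise ValueError("status conflict: resolve manually")
    return merged
```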
So losing SQL is not that big of a deal, assuming you’re in a highly decentralised, highly async environment.
The answer to "Why though?" should be forge interop, keeping knowledge in-repo, and the fact that it may fit how the organisation works (or doesn't): e.g., just as you can commit code changes offline, nothing stops you from updating issues while offline.
I addressed that in a different sub-thread. This one was about "why would you do it any other way?". My commentary about relational data in non-relational media was specifically about why one might do it another way. There are great advantages to doing what TFA does (and I acknowledged as much in a different sub-thread), but it's still interesting to consider what gets lost, and how to get some of it back.
The point here is to be able to work with issues, PRs, and wikis offline just as one is now used to doing with code. And to use the same underlying content-addressed version control tooling for all those things.