Why we built Nango: We built Nango to solve the pain of accessing OAuth APIs. Despite OAuth being a standard protocol in theory, implementing it remains a major burden, even with the help of a library. You still need to add endpoints and logic to your app for the server-side dance, implement token refreshes, build & secure your token storage, deal with redirects on the frontend, etc.
But the worst part is that (almost) every API has quirks and non-standard behaviour. This is why we think open source and knowledge sharing are key here: with the templates in Nango we capture these edge cases and make sure that OAuth just works.
How it works: Nango is a small TypeScript/Node.js service that handles the OAuth dance, token storage & refreshes for you. It works with any language, API or framework. You can easily self-host it for free, or use our cloud service if you want to avoid the burden of securing tokens yourself (that's how we pay the bills). To get started we recommend you take a look at our Quickstart on the GitHub repo: https://github.com/NangoHQ/nango
We currently support 40+ popular APIs. Adding a new one is as simple as updating a YAML file, so anybody can contribute one. In the coming weeks we plan to add a dashboard, a proxy to authorise requests, monitoring and more APIs.
One thing we learned from talking to other engineers about OAuth is that everybody has their own horror stories: What was the hardest OAuth API you ever used? What made it so difficult? We look forward to your stories and your feedback on Nango!
Repo: https://github.com/NangoHQ/nango // Website: https://www.nango.dev
Also, last time I checked out Pizzly I don't remember seeing any of the "sync"-type functionality; the fact that you've added that recently with Temporal is awesome!
You guys are awesome for making open source such a high priority!
Is Nango going to have the proxy server functionality? (Also, it would be really cool if you had a way to deploy the proxy to "the edge" via Cloudflare/Deno Deploy/Fly.io!)
Best of luck to the nango team!
Yes, we are thinking about bringing the proxy back as well, probably with some added features such as rate-limit handling and automatic retries (powered by Temporal).
The proxy edge deployment idea is interesting. Would you be calling it directly from your frontend/mobile code? We were thinking most people would want to have the proxy as close to their backend as possible, but maybe we are missing something?
Yeah, you're probably right: for most applications the proxy is best placed close to the backend.
One reason to run on the edge would be if the edge "worker" could retrieve the required token(s) from an edge DB and attach them as a header to the request on its way through to the backend app server, so that the server can then immediately make requests directly to the third-party API without a separate token lookup. (Though this likely only simplifies the design for the devs a bit, and performance is nearly identical.)
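Something like this very rough sketch (all the names and the KV binding here are made up, just to illustrate the idea):

    // Edge worker (e.g. Cloudflare): look up the user's token in an edge KV
    // store and attach it as a header before forwarding to the backend origin.
    interface Env {
      TOKENS: { get(key: string): Promise<string | null> }; // hypothetical KV binding
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        const userId = request.headers.get('x-user-id'); // however you identify the user
        const token = userId ? await env.TOKENS.get(`github-token:${userId}`) : null;
        const upstream = new Request(request); // clone so headers are mutable
        if (token) upstream.headers.set('x-github-token', token); // backend can use it directly
        return fetch(upstream); // continue on to the backend app server as usual
      },
    };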
Another reason, likely a less common use case, is if you could keep certain data from third-party APIs always fresh/cached at the edge. For instance, the Cloudflare Workers KV store has an API to update records (perhaps from a normal Nango server instance that maintains/syncs the records), so that this data can be "injected" as JSON into the body of an HTML response. This is definitely a niche use case though, lol.
Congrats on YC and the launch!
Do you have some high-level overview of how it all fits together technically?
IIUC the tokens are stored in a backend service (available on GitHub)? Are they encrypted? How does the frontend SDK communicate with the backend, is there some OAuth flow first to the backend service, to get a user-specific key, which lets you store subsequent tokens?
At a glance: Nango's frontend SDK only handles redirects for the OAuth flow; the Nango server is what actually gets called by the OAuth provider (via a callback URL), and that's when the token exchange happens. Tokens are stored in a Postgres database (by default we create one for you, but you can easily connect your own).
Before triggering the OAuth flow for an end-user, you indeed assign them a unique, user-specific key, so that you can retrieve this user's token later on!
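In practice it looks roughly like this (a sketch based on the docs; the SDK method names are from memory, so check the repo for the exact API):

    // Frontend (e.g. app.ts): trigger the OAuth flow for this user.
    import Nango from '@nangohq/frontend';

    const nango = new Nango({ host: 'https://nango.example.com' }); // your Nango server
    await nango.auth('github', '<user-specific-key>'); // opens the OAuth redirect/popup

    // Backend (e.g. server.ts): later, fetch a fresh token for the same key.
    import { Nango } from '@nangohq/node';

    const nangoServer = new Nango({ host: 'https://nango.example.com' });
    const token = await nangoServer.getToken('github', '<user-specific-key>');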
This is the BFF pattern, where the browser/native client that needs, say, GitHub or Google Calendar data, has to go through Nango to make those requests. (More here: https://docs.nango.dev/reference/guide#node-sdk )
That works well for a large class of problems, but I've seen some architectures where the token is stored client side, so that you don't have to worry about the proxy (Nango in this case) being a chokepoint. YMMV, but I thought it was worth calling out.
I do wish it supported encrypted storage. For example, I wrote/maintain a Vault plugin to do basically the same work as the backend side of this project[0]. I wonder if you would be interested in supporting Vault as a backend in addition to PostgreSQL down the line? Feel free to reach out if so.
To answer your question:
Like some others here, I haven't found the actual integration points to be terribly difficult with most OAuth 2 servers. Once you have a token, you can call their APIs. No problem. I wrote the Vault plugin I referenced above to basically just do automatic refreshes without ever exposing client secrets/refresh tokens to our services, and it works fine.
Rather, our customers would get into situations where they inadvertently revoked access, or where the user who initially authorized the integration left the company and it was automatically disabled, etc., and there was no notification that it had happened. Basically all of the lifecycle management side that couldn't be automated down to "refresh my token when it's about to expire" sucked. So anything you're looking to support there would be a huge value-add IMO.
Another one is that each provider has their own scope definitions/mapping to their APIs. Some scopes subsume others (e.g. GitHub has all repos, public repos, org admin, org read-only, etc.). Some get deprecated and need to be replaced with others on the next auth attempt. We could never keep them up to date because they were usually just part of docs, not enumerated through some API somewhere. If you had a way to provide the user with a way to see and select those scopes in advance, that would be huge. Think if my app or a user could answer the question "I want to call this API endpoint, what scopes do I need?" by just asking your service to figure it out.
[0]: https://github.com/puppetlabs/vault-plugin-secrets-oauthapp
"Rather our customers would get into situations where they inadvertently revoked access, the user that authorized the integration initially left the company and it was automatically disabled, etc. and there was no notification that it happened. Basically all of the lifecycle management side that couldn't be automated down to "refresh my token when it's about to expire" sucked. So anything you're looking to support there would be a huge value-add IMO."
I can definitely see value in notifying that API access has been revoked. If you think of any other case you'd like covered, I am interested!
> But the worst part is that (almost) every API has quirks and non-standard behaviour.
This is my long-standing pet peeve with identity providers (IdPs). Everyone goes ahead and does their own thing, especially when it comes to fetching user profiles (by having APIs in addition to a standard OAuth userinfo endpoint) and performing "federated logout" (when you want to terminate the session on the IdP's domain).
Does Nango relieve us of the burden in these areas? Thanks.
The part we focus on (for now) is getting access to all endpoints of external APIs, on behalf of users, so you can enrich your product with integrations.
This involves (1) getting users to log in to external systems from inside your app, and (2) storing/refreshing access tokens so you can access all endpoints of external APIs.
This means we're less focused on the Single Sign-On use case for now!
I don't want to spin up a droplet or Linode only to have the tokens stored unencrypted, in plain text, in (IIUC) the client? I understand the need to keep things tight as a small OSS startup, but my project is a one-dev show on a small scale.
I want to check this out but having to self-host with very real security risks is a no go for me.
Or does the backend do some heavy lifting I'm missing?
We made this doc when we were focusing on syncing data, which was much more intensive.
Though in the future, we plan to offer a proxy that would funnel your external requests and would require more processing power. The advantages of a proxy are that we can automatically authenticate requests, handle retries & rate limits, monitor & alert, etc.
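For the retries & rate-limit handling, think of something along these lines (an illustrative sketch, not the actual implementation):

    // Retry on 429/5xx responses, honouring Retry-After when present,
    // with exponential backoff as a fallback.
    async function fetchWithRetry(url: string, init: RequestInit, maxRetries = 3): Promise<Response> {
      for (let attempt = 0; ; attempt++) {
        const res = await fetch(url, init);
        const retryable = res.status === 429 || res.status >= 500;
        if (!retryable || attempt >= maxRetries) return res;
        const retryAfter = Number(res.headers.get('retry-after')); // NaN if absent
        const delayMs = retryAfter > 0 ? retryAfter * 1000 : 500 * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }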
With these kinds of integrations we haven't gotten requests for "Sign in with Apple" yet (though we are open to that).
Make it suitable for all integration authentication possibilities.
Or am I missing something?
Would be great to hear more about your use case for this! Feel free to message me on our community or at robin (at) nango (dot) dev.
Enterprisey people just don't care about minimalism in documentation or stable API design.
I don't quite understand how all the fuss around different OAuth implementations is justified, and especially not why you would need a separate library or SaaS for that. Implementing the client_credentials dance is like: get a token, cache the token + refresh token until the expiration time, use it until it expires, exchange the refresh token, back to the start. I mean yes, some providers want an extra parameter or have a strange token URL, but once you've grokked the general concept, it usually makes sense.
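The whole loop fits in a few lines (a minimal sketch; the token URL and env var names are placeholders):

    // Get a token, cache it until shortly before expiry, refresh, repeat.
    type CachedToken = { accessToken: string; refreshToken?: string; expiresAt: number };

    let cached: CachedToken | undefined;

    async function getAccessToken(): Promise<string> {
      const now = Date.now();
      if (cached && now < cached.expiresAt - 60_000) return cached.accessToken; // 60s margin
      const body = new URLSearchParams(
        cached?.refreshToken
          ? { grant_type: 'refresh_token', refresh_token: cached.refreshToken }
          : { grant_type: 'client_credentials' },
      );
      body.set('client_id', process.env.CLIENT_ID!);
      body.set('client_secret', process.env.CLIENT_SECRET!);
      const res = await fetch('https://provider.example/oauth/token', { method: 'POST', body });
      if (!res.ok) throw new Error(`token endpoint returned ${res.status}`);
      const json = await res.json() as { access_token: string; refresh_token?: string; expires_in: number };
      cached = {
        accessToken: json.access_token,
        refreshToken: json.refresh_token ?? cached?.refreshToken, // some providers rotate it
        expiresAt: now + json.expires_in * 1000,
      };
      return cached.accessToken;
    }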
Language is useful.
Trademark or no, enough people were on board with the OSI and OSD at the time it became a well-known and established term that we have a claim to it.
touch .well-known/open-source
Also the institution has proven to be stronger than one individual.
https://www.theregister.com/2020/01/03/osi_cofounder_resigns...?
Edit: come to think of it, make it a "contribute GPLv3 or MIT" license (not sure if something like that exists): you can use it for anything; however, if you offer it as a hosted product to your clients (AWS etc.), you must contribute x.y% of your staff FTE to it. GitHub/GitLab can do the KYC per employee, for which the company has to pay.
That is a big difference.
You are right about in-house, but the lawyers won't care so much because "in house" is not so clear. Can in-house include clients or partners using it? They pay and we host, so that seems to violate it. Pretty useless for most businesses. And it's not dual-licensed, so I cannot contribute or pay to change this outcome. For me, it's just the same as a closed-source SaaS product. That's fine; I just find it strange that people would let Amazon etc. bully them into crap licenses when they could just use MIT-but-not-for-Amazon.
To keep the setup of the Nango open-source version simple, we have made some choices that may not be ideal for production. Please take them into consideration before using it in production:
- The database is bundled in the Docker container with transient storage. This means that updating the Docker image causes loss of configs/credentials. We recommend that you connect Nango to a production DB that lives outside the Docker setup to mitigate this.
- Credentials are not encrypted at rest and are stored in plain text.
- There is no authentication by default.
- There is no SSL setup by default.
- The setup is not optimized for scaling.
- Updating the provider templates requires an update of the Docker containers.
LOL. Advertising something as a security product and doing this is absolutely ridiculous.
However, it seems reasonable to infer they are referring to storage in the DB itself. How would you suggest they encrypt it securely at rest for an out-of-the-box "setup and go" scenario?
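Mechanically it isn't hard (a sketch using Node's built-in crypto; the env var name is made up), but then you've just moved the "setup and go" problem to generating and protecting the key:

    // Encrypt/decrypt tokens with AES-256-GCM before they hit the DB.
    // TOKEN_ENC_KEY is a hypothetical env var holding 32 random bytes, base64-encoded.
    import crypto from 'node:crypto';

    const key = Buffer.from(process.env.TOKEN_ENC_KEY!, 'base64');

    function encryptToken(plaintext: string): string {
      const iv = crypto.randomBytes(12); // fresh nonce per token
      const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
      // store the IV and auth tag alongside the ciphertext
      return [iv, cipher.getAuthTag(), ciphertext].map((b) => b.toString('base64')).join('.');
    }

    function decryptToken(stored: string): string {
      const [iv, tag, ciphertext] = stored.split('.').map((s) => Buffer.from(s, 'base64'));
      const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
      decipher.setAuthTag(tag);
      return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
    }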
I will note that a lot of the existing "plugin" OAuth-type providers for popular web frameworks don't encrypt user tokens in the DB by default either.
Interesting point on existing providers not encrypting user tokens in the DB by default, though. I wouldn't have expected that, hmm.