The Architecture of Tinder's API Gateway
The design of Tinder's API gateway. Plus, a study on the Ballmer peak, how to review code as a junior dev, Mistral's new LLM and more.
Hey Everyone!
Today we’ll be talking about
The Architecture of Tinder’s API Gateway
What an API Gateway is and what purpose it serves
The design of Tinder’s API Gateway (TAG)
How a request flows through TAG and what middleware is involved
Tech Snippets
A Study on the Ballmer Peak
How to Review Code as a Junior Developer
Mistral releases new open source LLM with top performance on coding and math
Strategies to Improve Hiring Quality at your Org
Linux from Scratch
The Architecture of Tinder’s API Gateway
Tinder is the most popular dating app in the world with over 75 million monthly active users in over 190 countries. The app is owned by the Match Group, a conglomerate that also owns Match.com, OkCupid, Hinge and over 40 other dating apps.
Tinder’s backend consists of hundreds of microservices, which talk to each other using a service mesh built with Envoy. Envoy is an open source service proxy, so an Envoy process runs alongside every microservice and the service does all inbound/outbound communication through that process.
For the entry point to their backend, Tinder needed an API gateway. They evaluated several third party solutions like Amazon API Gateway, Apigee, Kong and others, but none met all of their needs.
Instead, they built Tinder Application Gateway (TAG), a highly scalable and configurable solution. It’s JVM-based and is built on top of Spring Cloud Gateway.
Tinder Engineering published a great blog post that delves into why they built TAG and how TAG works under the hood.
We’ll be summarizing this post and adding more context.
We’ll talk about concepts like API gateway solutions, service discovery, real world usage and more.
If you want to remember all the concepts we discuss in Quastor, you can download 100+ Anki Flash cards (open source, spaced-repetition cards) on everything we’ve discussed. Thanks for supporting Quastor!
What is an API Gateway
The API Gateway is the “front door” to your application and it sits between your users and all your backend services. When a client sends a request to your backend, it’s sent to your API gateway (it’s a reverse proxy).
The gateway service will handle things like
Authenticating the request and handling Session Management
Checking Authorization (making sure the client is allowed to do whatever they’re requesting)
Rate Limiting
Load balancing
Keeping track of the backend services and routing the request to whichever service handles it (this may involve converting an HTTP request from the client to a gRPC call to the backend service)
Caching (to speed up future requests for the same resource)
Logging
And much more.
The Gateway applies filters and middleware to the request to handle the tasks listed above. Then, it makes calls to the internal backend services to execute the request.
After getting the response, the gateway applies another set of filters (for adding response headers, monitoring, logging, etc.) and replies back to the client phone/tablet/computer.
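The pre-filter / backend-call / post-filter pipeline described above can be sketched in plain Java. This is a minimal illustration, not Tinder's actual implementation; all class and method names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// A minimal gateway pipeline: pre-filters transform the request,
// a backend handler produces a response, post-filters transform the response.
class MiniGateway {
    private final List<UnaryOperator<Map<String, String>>> preFilters = new ArrayList<>();
    private final List<UnaryOperator<Map<String, String>>> postFilters = new ArrayList<>();

    void addPreFilter(UnaryOperator<Map<String, String>> f)  { preFilters.add(f); }
    void addPostFilter(UnaryOperator<Map<String, String>> f) { postFilters.add(f); }

    Map<String, String> handle(Map<String, String> request,
                               UnaryOperator<Map<String, String>> backend) {
        for (var f : preFilters) request = f.apply(request);    // e.g. auth, rate limiting
        Map<String, String> response = backend.apply(request);  // call the backend service
        for (var f : postFilters) response = f.apply(response); // e.g. logging, response headers
        return response;
    }
}
```

A real gateway does this asynchronously and per-route, but the shape is the same: two filter chains wrapped around a backend call.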
Tinder’s Prior Challenges with API Gateways
Prior to building TAG, the Tinder team used multiple API Gateway solutions with each application team picking their own service.
Each gateway was built on a different tech stack, which made managing all the different services difficult and created compatibility issues when sharing reusable components across gateways. This had downstream effects, like inconsistent Session Management (managing user sign ins) across APIs.
Therefore, the Tinder team had the goal of finding a solution to bring all these services under one umbrella.
They were looking for something that
Supports easy modification of backend service routes
Allows for Tinder to add custom middleware logic for features like bot detection, schema registry and more
Allows easy Request/Response transformations (adding/modifying headers for the request/response)
The engineering team considered existing solutions like Amazon API Gateway, Apigee, Tyk.io, Kong, Express API Gateway and others. However, they couldn’t find one that met all of their needs and easily integrated into their system.
Some of the solutions were not well integrated with Envoy, the service proxy that Tinder uses for their service mesh. Others required too much configuration and a steep learning curve. The team wanted more flexibility to build their own plugins and filters quickly.
Tinder Application Gateway
The Tinder team decided to build their own API Gateway on top of Spring Cloud Gateway, which is part of the Java Spring framework.
Here’s an overview of the architecture of Tinder Application Gateway (TAG)
The components are
Routes - Developers can list their API endpoints in a YAML file. TAG will parse that YAML file and use it to preconfigure all the routes in the API.
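The blog post doesn’t show the exact schema of these YAML files, but a route definition in this style might look like the following (field names are illustrative, loosely modeled on Spring Cloud Gateway’s route configuration):

```yaml
# Hypothetical TAG route definitions (schema is illustrative)
routes:
  - id: geoip-lookup
    path: /v1/geoip            # incoming path to match
    service: geoip-service     # backend service, resolved via service discovery
    filters:
      - TrimRequestHeaders     # custom pre-filter applied at the route level
      - HttpToGrpc             # convert the HTTP request to a gRPC call
```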
Service Discovery - Tinder’s backend has hundreds of microservices, and they use Envoy to manage the service mesh. An Envoy proxy runs alongside every microservice and handles that service’s inbound/outbound communication. Envoy also has a control plane that manages and keeps track of all these services. TAG queries this Envoy control plane to look up the backend service for each route.
Pre Filters - Filters that you can configure in TAG to be applied on the request before it’s sent to the backend service. You can create filters to do things like modify request headers, convert HTTP to gRPC, handle authentication and more.
Post Filters - Filters that can be applied on the response before it’s sent back to the client. You might configure filters to look at any errors (from the backend services) and store them in Elasticsearch, modify response headers and more.
Custom/Global Filters - These Pre and Post filters can be custom or global. Custom filters can be written by application teams if they need their own special logic and are applied at the route level. Global filters are applied to all routes automatically.
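The split between global and custom (route-level) filters can be sketched like this. The filter names are hypothetical, and this is plain Java rather than the actual Spring Cloud Gateway API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Global filters run on every route; custom filters run only on the routes
// that registered them. The effective chain is global + route-specific.
class FilterRegistry {
    private final List<String> globalFilters = List.of("CaptureSemantics", "Authenticate");
    private final Map<String, List<String>> customFilters = Map.of(
            "/v1/geoip", List.of("HttpToGrpc"),
            "/v1/recs",  List.of("TrimHeaders", "SchemaValidate"));

    List<String> chainFor(String route) {
        var chain = new ArrayList<>(globalFilters);                 // always applied
        chain.addAll(customFilters.getOrDefault(route, List.of())); // per-route extras
        return chain;
    }
}
```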
Real World Usage of TAG at Tinder
Here’s an example of how TAG handles a request for reverse geo IP lookup (where the IP address of a user is mapped to their country).
The client sends an HTTP Request to Tinder’s backend calling the reverse geo IP lookup route.
A global filter captures the request semantics (IP address, route, User-Agent, etc.) and that data is streamed through Amazon MSK (Amazon Managed Streaming for Apache Kafka). It can be consumed by applications downstream for things like bot detection, logging, etc.
Another global filter authenticates the request and handles session management.
The request path is matched against one of the deployed routes in the API. For example, a path of /v1/geoip gets matched to its corresponding route.
The service discovery module in TAG will use Envoy to look up the backend service for the matched API route.
Once the backend service is identified, the request goes through a chain of pre-filters configured for that route. These filters will handle things like HTTP to gRPC conversion, trimming request headers and more.
The request is sent to the backend service and executed. The backend service then sends a response back to the API gateway.
The response will go through a chain of post-filters configured for that route. Post filters handle things like checking for any errors and logging them to Elasticsearch, adding/trimming response headers and more.
The final response is returned to the client.
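The routing portion of the flow above (match the path, then ask service discovery where the backend lives) might look roughly like this. The route table, service names and addresses are all invented for illustration; in Tinder's case the discovery lookup goes through Envoy's control plane:

```java
import java.util.Map;
import java.util.Optional;

// Resolve an incoming request path to a backend service address, mimicking how
// TAG matches a deployed route and then consults service discovery.
class RouteResolver {
    // route path -> logical backend service name
    private final Map<String, String> routes = Map.of(
            "/v1/geoip", "geoip-service",
            "/v1/recs",  "recommendations-service");
    // logical service name -> address (stands in for the Envoy discovery lookup)
    private final Map<String, String> discovery = Map.of(
            "geoip-service", "10.0.0.12:8080",
            "recommendations-service", "10.0.0.47:8080");

    Optional<String> resolve(String path) {
        return Optional.ofNullable(routes.get(path)) // match a deployed route
                       .map(discovery::get);         // service discovery step
    }
}
```

An unmatched path resolves to empty, which a real gateway would turn into a 404 before any backend is called.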
The Match Group owns other apps like Hinge, OkCupid, PlentyOfFish and others. All these brands also use TAG in production.
Tech Snippets
Subscribe to Quastor Pro for long-form articles on concepts in system design and backend engineering.
Past article content includes
System Design Concepts
Tech Dives
When you subscribe, you’ll also get hundreds of Spaced Repetition (Anki) Flashcards for reviewing all the main concepts discussed in prior Quastor articles.