The system can be divided into multiple components:
Below you can find a diagram of the infrastructure with detailed information.
The network consists of:
Some of this terminology is specific to AWS components, but the general architecture is the same for GCP and Azure customers.
Paragon microservices run within a Kubernetes cluster, an open-source system for automating deployment, scaling, and management of containerized applications.
AWS, Azure, and GCP all provide managed Kubernetes services that offload the majority of system administration work and are the recommended method for deploying Paragon. You can learn more about each one below:
Paragon uses three separate datastores: Postgres, Redis, and an S3-compatible blob store.
Postgres is used as the primary datastore. For multi-tenant cloud installations, three separate Postgres servers are provisioned for security and performance. On-premise installations use a single Postgres server for cost savings.
Some of the data stored in Postgres includes:
Redis is used as a cache and a worker queue. For multi-tenant cloud installations, both a standalone Redis server and a clustered Redis server are provisioned. On-premise installations use a single standalone Redis server for cost savings.
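As an illustration of the cache-and-queue split, the `redis-cli` commands below show the two usage patterns side by side. The key names and payloads are placeholders, not Paragon's actual key schema:

```shell
# Illustrative only: key names and values are placeholders.

# Cache usage: store a value with a 300-second TTL.
redis-cli SET cache:user:42 '{"name":"Ada"}' EX 300

# Worker-queue usage: a producer pushes a job onto a list...
redis-cli LPUSH queue:workflows job-123

# ...and a worker blocks (up to 5 seconds) waiting to pop one.
redis-cli BRPOP queue:workflows 5
```

The list-based push/pop pattern is a common way to implement a work queue on Redis; the cached entry simply expires on its own once the TTL elapses.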
Some of the data cached in Redis includes:
Files are persisted in an S3-compatible blob storage provider. A custom MinIO image has been built that can connect to AWS S3, Azure Blob Storage, or GCP Cloud Storage.
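Because the blob store exposes the S3 API regardless of the backing cloud, standard S3 tooling works against it unchanged. The endpoint and bucket names below are placeholders, not real installation values:

```shell
# Illustrative only: endpoint URL and bucket name are placeholders.
# Any S3-compatible client can point at the MinIO endpoint,
# whether the files ultimately live in AWS S3, Azure Blob
# Storage, or GCP Cloud Storage.
aws s3 ls s3://paragon-files/ --endpoint-url https://minio.internal:9000
```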
Some of the files stored in the blob storage provider include:
Paragon is built using a microservice architecture. The applications are deployed to the Kubernetes cluster using a Helm chart; Helm is the package manager for Kubernetes, and a chart is the package format it installs.
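A Helm-based deployment typically looks like the sketch below. The repository URL, chart name, and namespace are placeholders, not Paragon's actual published chart coordinates:

```shell
# Sketch only: repository URL, chart name, and values file
# are placeholders for the installation's real coordinates.
helm repo add paragon https://charts.example.com
helm repo update

# Install (or upgrade, if already installed) the release into
# its own namespace, supplying installation-specific settings
# via a values file.
helm upgrade --install paragon paragon/paragon \
  --namespace paragon --create-namespace \
  --values values.yaml
```

Using `helm upgrade --install` makes the command idempotent, so the same invocation works for both first-time installs and subsequent upgrades.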
The applications never export or sync data outside of the installation. Only three types of requests reach Paragon’s cloud infrastructure:
Grafana, Prometheus, and several exporters run in each environment and feed system metrics into Prometheus. These track hundreds of metrics across the load balancers, databases, worker queues, and microservices, with fine-tuned alerts that notify us of deviations or dangerous values. These metrics are only fed into the real-time dashboards running within the installation and never leave the installation.
You can read more about these in System Metrics and Alarms.
Only managed installations (e.g. cloud, single tenant, and managed on-premise) have system monitors. If you’re running an unmanaged on-premise installation, you’ll need to create your own tooling and processes for monitoring CPU and memory, error rates, etc.
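For an unmanaged on-premise installation, a Prometheus alerting rule is one possible starting point for the monitoring described above. The rule below uses standard node_exporter metric names; the threshold, duration, and labels are illustrative and should be tuned to your environment:

```yaml
# Example Prometheus alerting rule for an unmanaged
# installation; threshold, duration, and labels are
# illustrative, not recommended production values.
groups:
  - name: node-resources
    rules:
      - alert: HighMemoryUsage
        expr: |
          (1 - node_memory_MemAvailable_bytes
             / node_memory_MemTotal_bytes) > 0.90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Node memory usage above 90% for 10 minutes"
```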
The services running in Paragon are primarily in the private subnets, meaning they aren’t exposed to the public internet. To connect to administrative tools like Grafana or Kibana, or to interact with the Kubernetes cluster via `kubectl`, a bastion server can optionally be deployed to the installation.
In managed on-premise versions, a bastion is deployed and pre-configured with `kubectl`, `helm`, and other useful CLI tools to interact with the installation.
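A typical bastion workflow is sketched below. The hostnames, user names, and ports are placeholders, not values from a real installation:

```shell
# Sketch only: hostnames, users, and ports are placeholders.

# Forward a local port to Grafana running in the private subnet;
# Grafana then becomes reachable at http://localhost:3000.
ssh -N -L 3000:grafana.internal:3000 admin@bastion.example.com

# Or open a shell on the bastion and use its pre-installed tools:
ssh admin@bastion.example.com
kubectl get pods --namespace paragon
helm list --namespace paragon
```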