Get Front-End Test Coverage with Cypress

@LiteSun, Apache APISIX Committer from Shenzhen Zhiliu Technology Co.

Background#

In the article "Stable Product Delivery with Cypress", we discussed why we chose Cypress as our E2E testing framework. After spending nearly two months refining the test cases, we needed a coverage metric to quantify whether our tests were sufficient. This article describes how to collect front-end E2E coverage for APISIX Dashboard using Cypress.

What is code coverage?#

Code coverage is a software-testing metric that describes the proportion and extent to which a program's source code is exercised by tests. To a certain extent, test coverage reflects the health of the code.

Dependency Installation & Configuration#

To collect test coverage data, we need to instrument the original business code with probes that Cypress can read.

Cypress officially recommends two approaches. The first is to use nyc to generate a temporary directory and run the instrumented code from there to collect test coverage data. The second is to instrument the code on the fly through a code-transformation pipeline, which eliminates the hassle of temporary folders and makes collecting coverage data relatively painless. We chose the second way to collect front-end E2E coverage.

  1. Install the dependencies
yarn add babel-plugin-istanbul --dev
  2. Install the Cypress plugin
yarn add @cypress/code-coverage --dev
  3. Configure Babel
// web/config/config.ts
extraBabelPlugins: [
  ['babel-plugin-istanbul', {
    "exclude": ["**/.umi", "**/locales"]
  }],
],
  4. Configure the Cypress code coverage plugin
// web/cypress/plugins/index.js
module.exports = (on, config) => {
  require('@cypress/code-coverage/task')(on, config);
  return config;
};

// web/cypress/support/index.js
import '@cypress/code-coverage/support';
  5. Get test coverage

After the configuration is done, we run the test cases. When they finish, Cypress generates the coverage and .nyc_output folders, which contain the test coverage reports.

(Screenshot: the generated coverage and .nyc_output directories)

The test coverage information will appear in the console after executing the following command.

npx nyc report --reporter=text-summary

(Screenshot: coverage summary printed to the console)

Under the coverage directory, a more detailed report page will be available, as shown here.

(Screenshot: the detailed HTML coverage report)

  • Statements indicates whether each statement was executed

  • Branches indicates whether each branch (e.g. each arm of an if block) was executed

  • Functions indicates whether each function was called

  • Lines indicates whether each line was executed
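
To make these four metrics concrete, here is a minimal, hypothetical function (the name and numbers are invented purely for illustration):

// discount.js: a hypothetical module, not part of APISIX Dashboard
function discount(price, isMember) { // Functions: is discount() ever called?
  if (isMember) {                    // Branches: are both arms of the if taken?
    return price * 0.9;              // Statements/Lines: is this statement reached?
  }
  return price;
}

// A test that only calls discount(100, true) covers the function and the
// "true" branch, but leaves the "false" branch (return price) uncovered.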

Summary#

The test coverage rate reflects the quality of a project to a certain extent. At present, the APISIX Dashboard front-end E2E coverage rate has reached 71.57%. We will continue working with the community to increase the coverage rate and provide more reliable and stable products for users.

Install Apache APISIX from Helm Charts

@tokers, Apache APISIX Committer from Shenzhen Zhiliu Technology Co.

A few days ago, Zhiliu Inc. released an online Helm Charts repository. Users can now easily install Apache APISIX, Apache apisix-dashboard and Apache apisix-ingress-controller from it (rather than cloning the corresponding projects in advance).

How To Use#

It takes just a few steps to install Apache APISIX:

  1. Add the repository and fetch updates

    $ helm repo add apisix https://charts.apiseven.com
    $ helm repo update
  2. Check out the available charts in the repository

    $ helm search repo apisix
    NAME                     CHART VERSION  APP VERSION  DESCRIPTION
    apisix/apisix            0.1.2          2.1.0        A Helm chart for Apache APISIX
    apisix/apisix-dashboard  0.1.0          2.3.0        A Helm chart for Apache APISIX Dashboard
  3. Install Apache APISIX to your Kubernetes cluster

    $ helm install apisix-gw apisix/apisix --namespace default
    NAME: apisix-gw
    LAST DEPLOYED: Fri Feb 19 11:34:14 2021
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    1. Get the application URL by running these commands:
    export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services apisix-gw-gateway)
    export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
    echo http://$NODE_IP:$NODE_PORT
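
If everything went well, the gateway is now reachable from outside the cluster. As a quick smoke test (a sketch: APISIX normally answers unmatched requests with its own 404 error, so output similar to the below indicates the proxy is up; the exact Server header depends on your version):

    $ curl -i http://$NODE_IP:$NODE_PORT/
    HTTP/1.1 404 Not Found
    Server: APISIX/2.1
    ...
    {"error_msg":"404 Route Not Found"}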

Stable Product Delivery with Cypress

@LiteSun, Apache APISIX Committer from Shenzhen Zhiliu Technology Co.

Background#

The Apache APISIX Dashboard is designed to make it as easy as possible for users to operate Apache APISIX through a front-end interface, and since the project's inception, there have been 552 commits and 10 releases. With such rapid product iteration, it is important to ensure the quality of the open-source product. For this reason, we have introduced an E2E testing module to ensure stable product delivery.

What is Front-End E2E?#

E2E stands for "end-to-end" testing. It mimics user behavior: starting from an entry point, it executes actions step by step until a job is completed. Sound testing prevents code changes from breaking the original logic.

Why Cypress#

During the selection research period, we used Taiko, Puppeteer, TestCafe, and Cypress to write test cases for creating routes; writing real cases with each framework let us experience their respective features.

Taiko's distinguishing feature is its smart selectors, which can intelligently locate the elements you want to operate on based on text content and positional relations. It has a low start-up cost, so you can finish writing test cases quickly. However, it is not so friendly for writing test cases: if you exit the terminal by mistake, all the written test cases are lost, and if you want to run a complete test suite, you need to pair it with another test runner, which undoubtedly increases the learning cost.

Puppeteer has the best performance, but testing is not its focus; it is widely used for web crawling. Our project started with Puppeteer, the E2E testing framework officially recommended by ANTD, but after using it for a while we found it was not so friendly to non-front-end developers and it was hard to get other users involved: the lack of intelligent element positioning makes the learning curve steep when writing test cases.

TestCafe is surprisingly easy to install. It has a built-in waiting mechanism, so users don't have to sleep manually while waiting for page interactions, and it supports concurrent multi-browser testing, which is helpful for browser compatibility testing. The disadvantages are that its debugging process is not so user-friendly, you have to re-run a whole use case after each change, developers need some basic JavaScript knowledge, and its running speed is relatively slow compared with the other frameworks, especially when executing withText() to find elements.

After a comprehensive comparison, we finally chose Cypress as our front-end E2E framework, for four main reasons:

  1. Simple syntax

The syntax used in Cypress tests is very simple and easy to read and write. With a little practice, you can quickly master creating test cases, which matters for an open-source project because it lets community members interested in E2E testing participate in writing test cases at minimal learning cost (a sketch follows this list).

  2. Easy debugging

When debugging test cases, we can use Cypress's Test Runner, which presents multi-dimensional data that allows us to quickly pinpoint the problem.

  • Showing the status of the test case execution, including the number of successes, failures, and runs in progress.
  • Displaying the total time spent executing the entire test set.
  • Providing a built-in Selector Playground to help locate elements.
  • Showing each execution step of each use case, with a snapshot per step, so that information about each step can be inspected after it completes.
  3. Active community

Cypress has a large community of users, and there are always many people inside the community sharing their experiences and ideas.

This is helpful when encountering problems, since you are likely to hit problems that others have met before. Also, when new features are needed, we can participate in the community by discussing and adding the features we want to Cypress, just as we do in the APISIX community: listening to the community and feeding back to it.

  4. Clear documentation

Cypress's documentation is clearly structured and comprehensive. In the early stages, we were able to quickly introduce Cypress into our project and write our first case just by following the official documentation. In addition, the documentation site offers plenty of guidance on best practices.
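
As an illustration of the simple syntax mentioned in point 1, a minimal sketch of a case might look like the following (the page path, selector, and message text are invented for illustration):

describe('Create Route', () => {
  it('creates a route from the list page', () => {
    cy.visit('/routes/list');           // open the route list page
    cy.contains('Create').click();      // click the "Create" button
    cy.get('#name').type('test-route'); // fill in the route name
    cy.contains('Submit').click();      // submit the form
    cy.contains('Submit Successfully'); // assert on the success message
  });
});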

Cypress and APISIX Dashboard#

There are currently 49 test cases written for the APISIX Dashboard. We have configured the corresponding CI in GitHub Actions to ensure the code passes all tests before each merge, which guarantees code quality. Below, we share how we use Cypress in APISIX Dashboard, following Cypress best practices combined with our own project.

  1. Commonly used functions are encapsulated into commands.

Take login as an example: logging in is an essential step for entering the system, so we encapsulate it as a command, allowing the login command to be called before each case runs.

Cypress.Commands.add("login", () => {
  // Log in through the API directly and cache the token,
  // instead of walking through the login form for every case.
  cy.request(
    "POST",
    'http://127.0.0.1/apisix/admin/user/login',
    {
      username: "user",
      password: "user",
    }
  ).then((res) => {
    expect(res.body.code).to.equal(0);
    localStorage.setItem("token", res.body.data.token);
  });
});

beforeEach(() => {
  // init login
  cy.login();
});
  2. Extract selectors and data as public variables.

To make the meaning of the test code more intuitive to the reader, we extract selectors and data into public variables.

  const data = {
    name: 'hmac-auth',
    deleteSuccess: 'Delete Plugin Successfully',
  };
  const domSelector = {
    tableCell: '.ant-table-cell',
    empty: '.ant-empty-normal',
    refresh: '.anticon-reload',
    codemirror: '.CodeMirror',
    switch: '#disable',
    deleteBtn: '.ant-btn-dangerous',
  };
  3. Remove cy.wait(someTime)

We used cy.wait(someTime) in the early days of using Cypress, but found that it relies too much on the network environment and the performance of the test machine, which can cause test cases to fail when either is poor. The recommended practice is to use cy.wait() together with cy.intercept() to explicitly specify the network resource to wait for.

cy.intercept("https://apisix.apache.org/").as("fetchURL");
cy.wait("@fetchURL");

Summary#

At present, 49 test cases have been written for APISIX Dashboard. In the future, we will continue to enhance front-end E2E coverage, and we have agreed with the community that each new feature or bugfix submission should include test cases, to ensure the stability of the product.

Welcome to join us to polish the world-class gateway product.

Project address: https://github.com/apache/apisix-dashboard

Run Ingress APISIX on Amazon EKS

@Chao Zhang, Apache APISIX Committer from Shenzhen Zhiliu Technology Co.

This post is based on Install Ingress APISIX on Amazon EKS.

Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. This article explains how to run Ingress APISIX on it.

Ingress APISIX brings the good features of Apache APISIX (traffic splitting, multiple protocols, authentication, etc.) to Kubernetes, with a well-designed controller component to drive it, helping users meet complex demands for north-south traffic.

Prerequisites#

Before you go ahead, make sure you have an available EKS cluster on Amazon AWS. If you don't have one, please create one according to the guide.

You should have the kubectl tool in your environment; set its context to your EKS cluster by running:

aws eks update-kubeconfig --name <your eks cluster name> --region <your region>

After the Kubernetes cluster is ready, create the namespace ingress-apisix; all subsequent resources will be created in this namespace.

kubectl create namespace ingress-apisix

We use Helm to deploy all components of Ingress APISIX (Apache APISIX and apisix-ingress-controller), so please also install Helm according to its installation guide. The Helm charts for Apache APISIX and apisix-ingress-controller are in apache/apisix-helm-chart and apache/apisix-ingress-controller respectively; clone them to get the charts.

Install Apache APISIX#

Apache APISIX, as the proxy plane of apisix-ingress-controller, should be deployed first.

cd /path/to/apisix-helm-chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm dependency update ./chart/apisix
helm install apisix ./chart/apisix \
--set gateway.type=LoadBalancer \
--set allow.ipList="{0.0.0.0/0}" \
--namespace ingress-apisix
kubectl get service --namespace ingress-apisix

The above commands create two Kubernetes Service resources: apisix-gateway, which processes the real traffic, and apisix-admin, which acts as the control plane and processes all configuration changes. Here we created apisix-gateway as a LoadBalancer-type Service, which relies on an AWS Network Load Balancer to expose it to the Internet. You can find the load balancer hostname with the following command:

kubectl get service apisix-gateway \
--namespace ingress-apisix \
-o jsonpath='{.status.loadBalancer.ingress[].hostname}'

Another thing to note is that the allow.ipList field should be customized according to the EKS CIDR ranges of your cluster, so that apisix-ingress-controller is authorized by Apache APISIX to push resources.
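
For example, a hedged sketch (the CIDR below is only a placeholder; substitute the actual CIDR ranges of your cluster):

helm upgrade apisix ./chart/apisix \
--set gateway.type=LoadBalancer \
--set allow.ipList="{172.31.0.0/16}" \
--namespace ingress-apisix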

See values.yaml (https://github.com/apache/apisix-helm-chart/blob/master/chart/apisix/values.yaml) to learn all the configuration items if you have other requirements.

Install apisix-ingress-controller#

After Apache APISIX is deployed successfully, it's time to install the controller component.

cd /path/to/apisix-ingress-controller
# install base resources, e.g. ServiceAccount.
helm install ingress-apisix-base -n ingress-apisix ./charts/base
# install apisix-ingress-controller
helm install ingress-apisix ./charts/ingress-apisix \
--set ingressController.image.tag=dev \
--set ingressController.config.apisix.baseURL=http://apisix-admin:9180/apisix/admin \
--set ingressController.config.apisix.adminKey={YOUR ADMIN KEY} \
--namespace ingress-apisix

The ingress-apisix-base chart installs some basic dependencies for apisix-ingress-controller, such as the ServiceAccount and its exclusive CRDs.

The ingress-apisix chart installs the controller itself. You can change the image tag to the desired release version; also, the value of ingressController.config.apisix.adminKey in the commands above should be filled in according to your actual deployment (make sure the admin key is the same as the one in the Apache APISIX deployment). See values.yaml to learn all the configuration items if you have other requirements.

Now open your EKS console, choose your cluster and click the Workloads tab; you should see that all pods of Apache APISIX, etcd and apisix-ingress-controller are ready.

Test#

Now that we have deployed all components of Ingress APISIX, it's important to check whether they run well. We will deploy an httpbin service and ask Apache APISIX to route all requests with the Host header "local.httpbin.org" to it.

The first step is to create the httpbin workload and expose it.

kubectl run httpbin --image kennethreitz/httpbin --port 80
kubectl expose pod httpbin --port 80

To let Apache APISIX route requests correctly, we need to create an ApisixRoute resource to drive it.

# ar-httpbin.yaml
apiVersion: apisix.apache.org/v1
kind: ApisixRoute
metadata:
  name: httpserver-route
spec:
  rules:
  - host: local.httpbin.org
    http:
      paths:
      - backend:
          serviceName: httpbin
          servicePort: 80
        path: /*

The above ApisixRoute resource asks Apache APISIX to route requests whose Host header is "local.httpbin.org" to the httpbin backend (the one we just created).

Now apply it. Note that the Service and the ApisixRoute resource must be put in the same namespace; crossing namespaces is not allowed in apisix-ingress-controller.

kubectl apply -f ar-httpbin.yaml

Test it with a simple curl call from a place where the Apache APISIX service is reachable.

$ curl http://{apisix-gateway-ip}:{apisix-gateway-port}/headers -s -H 'Host: local.httpbin.org'
{
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.64.1",
    "X-Amzn-Trace-Id": "Root=1-5ffc3273-2928e0844e19c9810d1bbd8a"
  }
}

If the Service type is ClusterIP, you have to log in to a pod in the EKS cluster and access Apache APISIX through its ClusterIP or Service FQDN. If it is exposed (whether NodePort or LoadBalancer), just access its externally reachable endpoint.
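
Alternatively, for a quick check from your workstation without entering a pod, kubectl port-forward also works (a sketch, assuming the apisix-gateway Service created above listens on port 80):

kubectl port-forward service/apisix-gateway 8080:80 --namespace ingress-apisix
curl http://127.0.0.1:8080/headers -s -H 'Host: local.httpbin.org'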

A First Look at the Kubernetes Service APIs

@gxthrj, Apache APISIX PMC & Apache apisix-ingress-controller Founder from Shenzhen Zhiliu Technology Co.

Preface#

The author is an Apache APISIX PMC member and the founder of the Apache APISIX Ingress Controller. Based on research and community discussions, we plan to gradually support the Kubernetes Service APIs in later versions of the Apache APISIX Ingress Controller.

As we know, Kubernetes offers multiple ways to expose in-cluster services to the outside, and one of the most popular is Ingress. As a standard for exposing services, Ingress has quite a few third-party implementations; each carries the shadow of its own technology stack and underlying gateway, and they are not compatible with each other.

To unify the various Ingress implementations and allow them to be managed consistently on Kubernetes, the SIG-NETWORK community has introduced the Kubernetes Service APIs, a set of standard interfaces referred to as the second generation of Ingress.

Overview#

This article introduces the basic concepts of the Kubernetes Service APIs by starting from a few questions.

Introduction#

The Kubernetes Service APIs are billed as second-generation Ingress technology. In what ways are they better than the first generation?#

When the Kubernetes Service APIs were designed, the goal was never limited to Ingress; it was to enhance service networking as a whole, focusing on the following points: expressiveness, extensibility, and RBAC.

  1. Stronger expressiveness: traffic can be managed based on, for example, headers or weighting (see the fuller example after this list)
kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
...
matches:
- path:
    value: "/foo"
  headers:
    values:
      version: "2"
- path:
    value: "/v2/foo"
  2. Enhanced extensibility: the Service APIs propose the concept of multi-layer APIs, with each layer exposing its interfaces independently, which makes it convenient for other custom resources to integrate with the APIs and enables finer-grained (API-level) control.

(Figure: the multi-layer API model)

  3. Role-oriented RBAC: one idea behind the multi-layer API design is to model resource objects from the users' perspective. These resources ultimately map to the common roles that run applications on Kubernetes.
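
To make the expressiveness point above concrete, a fuller hypothetical HTTPRoute under the v1alpha1 draft might look like this (the route name and backend are invented, and the field layout follows the draft API, which may still change):

apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: foo-route
spec:
  rules:
  - matches:
    - path:
        type: Prefix
        value: /foo
      headers:
        values:
          version: "2"
    forwardTo:
    - serviceName: foo-v2   # hypothetical backend Service
      port: 80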

Which resource objects do the Kubernetes Service APIs abstract?#

Based on user roles, the Kubernetes Service APIs define the following resources:

GatewayClass, Gateway, Route

  1. GatewayClass defines a set of gateway types with common configuration and behavior.
  • Its relationship to Gateway is similar to the ingress.class annotation in Ingress;

  • A GatewayClass defines a set of gateways sharing the same configuration and behavior. Each GatewayClass is handled by a single controller, and the controller-to-GatewayClass relationship is one-to-many;

  • GatewayClass is a cluster-scoped resource. At least one GatewayClass must be defined in order to have a functional gateway.

  2. Gateway requests a point where traffic can be translated to Services within the cluster.
  • Purpose: to bring traffic from outside the cluster into it. This is the real "ingress" entity;

  • It defines a request for a specific load-balancer configuration, which is also an implementation of the GatewayClass's configuration and behavior;

  • A Gateway resource can be created directly by an operator, or by the controller handling a GatewayClass;

  • Gateway and Route have a many-to-many relationship.

  3. Route describes how traffic passing through the gateway is mapped to Services.

(Figure: UML diagram of the resource relationships)

In addition, to allow backend services to be configured flexibly, the Kubernetes Service APIs define a dedicated BackendPolicy resource object.

Through the BackendPolicy object, you can configure TLS and health checks, and specify the type of the backend, for example a Service or a Pod.
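
A hedged sketch of what such a policy might look like under the v1alpha1 draft (the names are invented, and the exact fields may differ between draft revisions):

apiVersion: networking.x-k8s.io/v1alpha1
kind: BackendPolicy
metadata:
  name: httpbin-policy
spec:
  backendRefs:
  - kind: Service            # the backend being configured
    name: httpbin
    port: 80
  tls:
    certificateAuthorityRef: # CA used to verify the backend
      kind: Secret
      name: backend-ca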

What changes will the adoption of the Kubernetes Service APIs bring?#

As an implementation standard, the Kubernetes Service APIs bring the following changes:

  1. Generality: there can be multiple implementations, just as Ingress has multiple implementations. Ingress controllers can still be customized around the characteristics of their gateways, yet they all share a consistent configuration structure: one data structure can configure many kinds of ingress controllers.

  2. The class concept: GatewayClasses can describe different types of load-balancer implementations. These classes make it easy and explicit for users to understand what capabilities are available through the resource model itself.

  3. Shared gateways: by allowing independent route resources (HTTPRoute) to bind to the same GatewayClass, load balancers and VIPs can be shared. Layered by user role, this lets teams share infrastructure safely without having to care about the concrete implementation of the underlying Gateway.

  4. Typed backend references: with typed backend references, a Route can reference Kubernetes Services, or any kind of Kubernetes resource designed to be a gateway backend, such as a Pod, a StatefulSet (for example a database), or even an accessible resource outside the cluster.

  5. Cross-namespace references: Routes in different namespaces can bind to a Gateway, allowing mutual access across namespaces; at the same time, the range of namespaces from which Routes may attach to a given Gateway can be restricted.

Which ingress implementations currently support the Kubernetes Service APIs?#

So far, the Ingress implementations known, at the code level, to support the Kubernetes Service APIs resource objects are Contour and ingress-gce.

How do the Kubernetes Service APIs manage read and write permissions on resources?#

The Kubernetes Service APIs divide users into three roles:

  1. Infrastructure provider: GatewayClass

  2. Cluster operator: Gateway

  3. Application developer: Route

RBAC (role-based access control) is the standard for Kubernetes authorization. It allows users to configure who can perform which actions on resources in specific scopes. RBAC can be used to enable each of the roles defined above.

In most cases, it is expected that all roles can read all resources.

The write permissions of the three-layer model are as follows; a hedged RBAC sketch follows the table.

|                         | GatewayClass | Gateway | Route |
| ----------------------- | ------------ | ------- | ----- |
| Infrastructure Provider | Yes          | Yes     | Yes   |
| Cluster Operators       | No           | Yes     | Yes   |
| Application Developers  | No           | No      | Yes   |
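
For instance, the application-developer row of the table could be enforced with standard Kubernetes RBAC along these lines (a sketch; the role name is invented, and the API group follows the v1alpha1 draft):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: app-developer        # hypothetical role name
rules:
# Application developers may manage Routes...
- apiGroups: ["networking.x-k8s.io"]
  resources: ["httproutes"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
# ...but may only read Gateways and GatewayClasses.
- apiGroups: ["networking.x-k8s.io"]
  resources: ["gateways", "gatewayclasses"]
  verbs: ["get", "list", "watch"]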

What extension points do the Kubernetes Service APIs provide?#

Gateway requirements are very diverse: the same scenario can be implemented in many ways, each with its own trade-offs. The Kubernetes Service APIs distill multi-layer resource objects while also reserving some extension points.

Currently, the extension points of the Kubernetes Service APIs are mostly concentrated on Route:

  • RouteMatch can extend Route matching rules.

  • Specifying a backend can extend backend services of particular types; besides the Kubernetes resources mentioned above, this could extend to file systems, function expressions, and so on.

  • Route filters can add extension points into a Route's lifecycle to process the request/response.

  • Custom Route: when none of the above extension points suffice, a completely custom Route can be defined.

Summary#

Through a question-and-answer approach, this article has given a basic introduction to the Kubernetes Service APIs. Overall, the Service APIs distill many Ingress best practices: the enhanced expressiveness, for instance, is essentially an extension of Route's capabilities, and the BackendPolicy object can designate almost any Kubernetes backend resource as an upstream. Of course, the project also has shortcomings at this early stage. Although the Kubernetes Service APIs have settled the resource objects at a high level, many internal details still need to be discussed and finalized to prevent potential conflict scenarios, so the structure remains somewhat in flux.



Envoy and Apache APISIX: Another way to implement the Envoy filter

@nic-chen, Apache APISIX PMC from Shenzhen Zhiliu Technology Co.

Source: https://www.apiseven.com/en/blog/another-way-to-implement-envoy-filter

Ways to implement Envoy filter#

Envoy filter#

Envoy is an L7 proxy and communication bus designed for large, modern service-oriented architectures. A pluggable filter chain mechanism allows filters to be written to perform different tasks and inserted into the main server.

(Figure: the Envoy filter chain)

Extension methods#

The existing filters may not meet users' custom requirements, in which case Envoy needs to be extended: new filters are customized on the basis of the existing filter chain to meet those requirements.

Developers can extend Envoy in three ways:

|      | Getting-started difficulty | Stability    | Development efficiency | Deployment & compilation              |
| ---- | -------------------------- | ------------ | ---------------------- | ------------------------------------- |
| C++  | high                       | stable       | low                    | long time to compile                  |
| Lua  | low                        | stable       | high                   | no need to compile, deploy directly   |
| WASM | medium to high             | on the fence | depends on language    | compilation time depends on language  |
  1. Using C++ to extend

In this way, C++ code is written directly on top of Envoy for functional enhancement; after a custom filter is implemented, a new binary is recompiled to complete the upgrade. There are two problems with this approach:

  • Limited by the C++ language: it is difficult to get started with, and developers are scarce.

  • It increases the complexity of deployment, operations, maintenance, and upgrades. Envoy becomes heavier and heavier, and every change requires recompiling the binary, which is not conducive to iteration and management.

  2. Using Lua to extend

Lua was born to be embedded in applications, providing flexible extension and customization capabilities for them, and is widely used for this purpose.

The Lua filter allows Lua scripts to be run during the request and response flows. The main features currently supported include: inspection of headers, body, and trailers while streaming in either the request or response flow; modification of headers and trailers; blocking and buffering the full request/response body for inspection; performing an outbound async HTTP call to an upstream host; performing a direct response and skipping further filter iteration; and more.
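
For a flavor of what this looks like, here is a minimal hypothetical Lua filter (the header name and blocked value are invented):

-- Hypothetical Lua filter: inspect a request header and respond
-- directly (skipping further filters) when it matches a blocked value.
function envoy_on_request(request_handle)
  local ua = request_handle:headers():get("user-agent")
  if ua ~= nil and string.find(ua, "bad-bot", 1, true) then
    request_handle:respond({[":status"] = "403"}, "blocked")
  end
end

-- Add a marker header on the response path.
function envoy_on_response(response_handle)
  response_handle:headers():add("x-lua-demo", "true")
end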

At present, many people distribute Lua code directly inside the configuration, which is not conducive to code organization and management, and makes it hard to share the code with others to form an ecosystem.

  3. Using WASM to extend

Developers can write filters in their own programming language, compile them into WASM format using tools, and embed them into Envoy to run.

It currently supports only a few languages, and using those languages to extend Envoy is still not that simple. On the other hand, many people still have reservations about WASM and may not adopt it directly.

Apache APISIX solution#

Based on the above analysis, we can see that Lua is very suitable for extending Envoy: it is easy to learn, and development efficiency is extremely high. Because it is embedded in Envoy, there is no additional network overhead, and performance is good.

The Apache APISIX community proposes its own Lua-based solution: a powerful and flexible basic library that allows all Apache APISIX plugins, including those to be developed in the future, to run on Envoy. Developers can also build their own customized plugins on top of this basic library.

Apache APISIX is a dynamic, real-time, high-performance API gateway, based on the Nginx library and Lua. Apache APISIX provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more.

Example#

Please check the repo for specific code and how to run: https://github.com/api7/envoy-apisix

The relevant configuration of Envoy is as follows:

Define a Filter:

http_filters:
- name: entry.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    source_codes:
      entry.lua:
        filename: /apisix/entry.lua

Enable the Filter for a route and configure it with metadata:

routes:
- match:
    prefix: "/foo"
  route:
    cluster: web_service
  typed_per_filter_config:
    envoy.filters.http.lua:
      "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.LuaPerRoute
      name: entry.lua
  metadata:
    filter_metadata:
      envoy.filters.http.lua:
        plugins:
        - name: uri-blocker
          conf:
            rejected_code: 403
            block_rules:
            - root.exe
            - root.m+

How does it work#

We don't need to make major changes to Envoy, only some optimizations that address common needs.

We shield platform differences from the plugin layer: all interfaces that plugins need are abstracted in the underlying framework, which we call apisix.core, so that every plugin can run on both Envoy and Apache APISIX.
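
As a rough sketch of the plugin shape this enables (simplified and hypothetical; the real uri-blocker plugin in Apache APISIX has schema validation and more options):

-- Simplified, hypothetical sketch of a plugin written against apisix.core.
local core = require("apisix.core")

local _M = {
    version = 0.1,
    name = "uri-blocker",
}

function _M.rewrite(conf, ctx)
    -- ctx carries the parsed request; conf comes from the route metadata.
    for _, rule in ipairs(conf.block_rules) do
        -- string.find treats the rule as a Lua pattern here; it stands in
        -- for the real regex matching.
        if string.find(ctx.var.uri, rule) then
            core.log.warn("blocking uri: ", ctx.var.uri)
            -- stop the request with the configured status code
            return conf.rejected_code
        end
    end
end

return _M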

(Figure: architecture diagram)

We use the previous example to show how the plugin runs:

(Figure: plugin workflow)

First step, read configuration#

We use metadata to configure which plugins need to run on each route and what the configuration for each plugin is. In the example, we configured the uri-blocker plugin for the route whose prefix is /foo, together with the plugin's block rules and the response status to return when a block is required.

Second step, parse request#

We encapsulate the client request data into ctx so that it can be used directly throughout the whole process.

Third step, run the plugin#

We determine whether this request should be blocked by matching the configured rules against the obtained URI. If a block is needed, we call respond to answer directly; otherwise, we let the request pass.

Future outlook#

More and more APISIX plugins are able to run on Envoy, and eventually all APISIX plugins (even those developed in the future) will be able to run on Envoy.

At the same time, we hope to work with the Envoy community on the Lua filter, optimizing and improving it to enhance Envoy's extension capabilities and reduce the difficulty of extending Envoy.