# elasticsearch-logger

## Description

The `elasticsearch-logger` Plugin is used to forward logs to Elasticsearch for analysis and storage.

When the Plugin is enabled, APISIX will serialize the request context information to Elasticsearch Bulk format and submit it to the batch queue. When the maximum batch size is exceeded, the data in the queue is pushed to Elasticsearch. See batch processor for more details.
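For reference, the Bulk format is newline-delimited JSON in which each log entry is preceded by an action line naming the target index. The request below is only a sketch of the payload shape the Plugin assembles internally (the index name services and the log fields are assumptions matching the examples later in this document); you do not need to send it yourself:

curl -X POST "http://127.0.0.1:9200/_bulk" \
-H 'Content-Type: application/x-ndjson' \
--data-binary '{"index":{"_index":"services"}}
{"route_id":"1","client_ip":"127.0.0.1","latency":0}
'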
## Attributes

Name | Type | Required | Default | Description |
---|---|---|---|---|
endpoint_addr | string | True | | Elasticsearch API endpoint address. |
field | array | True | | Elasticsearch field configuration. |
field.index | string | True | | Elasticsearch _index field. |
field.type | string | False | Elasticsearch default value | Elasticsearch _type field. |
auth | array | False | | Elasticsearch authentication configuration. |
auth.username | string | True | | Elasticsearch authentication username. |
auth.password | string | True | | Elasticsearch authentication password. |
ssl_verify | boolean | False | true | When set to true, enables SSL verification as per the OpenResty docs. |
timeout | integer | False | 10 | Elasticsearch send data timeout in seconds. |

This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every 5 seconds or when the data in the queue reaches 1000 entries. See Batch Processor for more information or to set your custom configuration.
## Enabling the Plugin

### Full configuration

The example below shows a complete configuration of the Plugin on a specific Route:
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"plugins":{
"elasticsearch-logger":{
"endpoint_addr":"http://127.0.0.1:9200",
"field":{
"index":"services",
"type":"collector"
},
"auth":{
"username":"elastic",
"password":"123456"
},
"ssl_verify":false,
"timeout": 60,
"retry_delay":1,
"buffer_duration":60,
"max_retry_count":0,
"batch_max_size":1000,
"inactive_timeout":5,
"name":"elasticsearch-logger"
}
},
"upstream":{
"type":"roundrobin",
"nodes":{
"127.0.0.1:1980":1
}
},
"uri":"/elasticsearch.do"
}'
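Note that retry_delay, buffer_duration, max_retry_count, batch_max_size, and inactive_timeout in the example above are batch processor settings rather than attributes specific to this Plugin; see Batch Processor for what each of them controls.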
### Minimal configuration example

The example below shows a bare minimum configuration of the Plugin on a Route:
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"plugins":{
"elasticsearch-logger":{
"endpoint_addr":"http://127.0.0.1:9200",
"field":{
"index":"services"
}
}
},
"upstream":{
"type":"roundrobin",
"nodes":{
"127.0.0.1:1980":1
}
},
"uri":"/elasticsearch.do"
}'
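Before sending traffic through the Route, you may want to confirm that the configured Elasticsearch endpoint is reachable. A quick check, assuming the address used above (add -u elastic:123456 if you enabled the auth settings from the full configuration):

curl "http://127.0.0.1:9200"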
## Example usage

Once you have configured the Route to use the Plugin, when you make a request to APISIX, it will be logged in your Elasticsearch server:
curl -i http://127.0.0.1:9080/elasticsearch.do\?q\=hello
HTTP/1.1 200 OK
...
hello, world
You should be able to get the log from Elasticsearch:
curl -X GET "http://127.0.0.1:9200/services/_search" | jq .
{
"took": 0,
...
"hits": [
{
"_index": "services",
"_type": "_doc",
"_id": "M1qAxYIBRmRqWkmH4Wya",
"_score": 1,
"_source": {
"apisix_latency": 0,
"route_id": "1",
"server": {
"version": "2.15.0",
"hostname": "apisix"
},
"request": {
"size": 102,
"uri": "/elasticsearch.do?q=hello",
"querystring": {
"q": "hello"
},
"headers": {
"user-agent": "curl/7.29.0",
"host": "127.0.0.1:9080",
"accept": "*/*"
},
"url": "http://127.0.0.1:9080/elasticsearch.do?q=hello",
"method": "GET"
},
"service_id": "",
"latency": 0,
"upstream": "127.0.0.1:1980",
"upstream_latency": 1,
"client_ip": "127.0.0.1",
"start_time": 1661170929107,
"response": {
"size": 192,
"headers": {
"date": "Mon, 22 Aug 2022 12:22:09 GMT",
"server": "APISIX/2.15.0",
"content-type": "text/plain; charset=utf-8",
"connection": "close",
"transfer-encoding": "chunked"
},
"status": 200
}
}
}
]
}
}
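If the index accumulates entries from several Routes, you can narrow the search down. For example, the URI query below matches only the route_id shown above (the index name services is the one configured earlier):

curl -X GET "http://127.0.0.1:9200/services/_search?q=route_id:1" | jq .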
## Metadata

You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
Name | Type | Required | Default | Description |
---|---|---|---|---|
log_format | object | False | {"host": "$host", "@timestamp": "$time_iso8601", "client_ip": "$remote_addr"} | Log format declared as key value pairs in JSON format. Values only support strings. APISIX or Nginx variables can be used by prefixing the string with `$`. |
IMPORTANT

Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `elasticsearch-logger` Plugin.
The example below shows how you can configure it through the Admin API:
curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/elasticsearch-logger \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr"
}
}'
With this configuration, your logs would be formatted as shown below:
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
Now, make a request to APISIX again:
curl -i http://127.0.0.1:9080/elasticsearch.do\?q\=hello
HTTP/1.1 200 OK
...
hello, world
You should be able to get this log from Elasticsearch:
curl -X GET "http://127.0.0.1:9200/services/_search" | jq .
{
"took": 0,
...
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 1,
"hits": [
{
"_index": "services",
"_type": "_doc",
"_id": "NVqExYIBRmRqWkmH4WwG",
"_score": 1,
"_source": {
"@timestamp": "2022-08-22T20:26:31+08:00",
"client_ip": "127.0.0.1",
"host": "127.0.0.1",
"route_id": "1"
}
}
]
}
}
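The log_format accepts any APISIX or Nginx variable, so you can add further fields as needed. The sketch below extends the metadata above with the response status and the request method ($status and $request_method are standard Nginx variables; pick whichever variables suit your setup):

curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/elasticsearch-logger \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "log_format": {
        "host": "$host",
        "@timestamp": "$time_iso8601",
        "client_ip": "$remote_addr",
        "status": "$status",
        "method": "$request_method"
    }
}'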
## Disable Metadata

curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/elasticsearch-logger \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X DELETE
## Disable Plugin

To disable the `elasticsearch-logger` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"plugins":{},
"upstream":{
"type":"roundrobin",
"nodes":{
"127.0.0.1:1980":1
}
},
"uri":"/elasticsearch.do"
}'