
【ELK】Installing and Deploying ELK on Windows

Installing and Deploying ELK

  • 1. Download
  • 2. Configure & start
    • 2.1 elasticsearch
      • 2.1.1 Generate the CA certificate
      • 2.1.2 Generate the node certificate
      • 2.1.3 Move the credentials to a dedicated directory
      • 2.1.4 Edit the configuration
      • 2.1.5 Start
      • 2.1.6 Access test
      • 2.1.7 Create a Kibana account
    • 2.2 kibana
      • 2.2.1 Edit the configuration
      • 2.2.2 Start
      • 2.2.3 Access test
    • 2.3 logstash
      • 2.3.1 Edit the configuration
      • 2.3.2 Start
    • 2.4 filebeat
      • 2.4.1 Edit the configuration
      • 2.4.2 Start

1. Download

Official site: https://www.elastic.co/

Download page: Download Elasticsearch | Elastic

Other versions and products: Past Releases of Elastic Stack Software | Elastic

IK analyzer plugin: https://github.com/infinilabs/analysis-ik/releases

What to download: Elasticsearch, Kibana, Logstash, and Filebeat (the four components configured below).

All products must be kept on the same version; this guide uses 8.9.2 throughout.

2. Configure & start

2.1 elasticsearch

2.1.1 Generate the CA certificate

Switch to the bin directory and run:

elasticsearch-certutil.bat ca

At the first prompt (the output filename), press Enter to accept the default; the file is generated in the current directory.
At the second prompt, enter a password, e.g. 123456.
When the command finishes it produces one file: elastic-stack-ca.p12
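The same step can also be run non-interactively; a sketch using certutil's documented --out and --pass flags (the filename and password are just the values used above):

elasticsearch-certutil.bat ca --out elastic-stack-ca.p12 --pass "123456"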

2.1.2 Generate the node certificate

elasticsearch-certutil.bat cert --ca ./elastic-stack-ca.p12

At the first prompt, enter the CA password from the previous step and press Enter.
At the second prompt (the output filename), press Enter without typing anything.
At the third prompt, set a password and press Enter. This produces one file: elastic-certificates.p12
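Likewise, a non-interactive sketch of the same step with certutil's documented flags (--ca-pass is the CA password set above; adjust values to your setup):

elasticsearch-certutil.bat cert --ca ./elastic-stack-ca.p12 --ca-pass "123456" --out elastic-certificates.p12 --pass "123456"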

2.1.3 Move the credentials to a dedicated directory

Place the generated elastic-stack-ca.p12 and elastic-certificates.p12 in the E:\software\elasticsearch-8.9.2\config\certificates directory.
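This can be scripted from the bin directory where the files were generated; a minimal sketch (adjust the paths to your install location):

mkdir E:\software\elasticsearch-8.9.2\config\certificates
move elastic-stack-ca.p12 E:\software\elasticsearch-8.9.2\config\certificates\
move elastic-certificates.p12 E:\software\elasticsearch-8.9.2\config\certificates\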

2.1.4 Edit the configuration

Edit config/elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-elatics
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: E:\ELK\elasticsearch-8.9.2\data
#
# Path to log files:
#
path.logs: E:\ELK\elasticsearch-8.9.2\logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 22-04-2024 01:19:26
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true

#xpack.security.http.ssl:
#  enabled: true
#  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: E:\\software\\elasticsearch-8.9.2\\config\\certificates\\elastic-certificates.p12
  keystore.password: 123456
  truststore.path: E:\\software\\elasticsearch-8.9.2\\config\\certificates\\elastic-certificates.p12
  truststore.password: 123456
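Keeping keystore.password and truststore.password in plain text works, but Elasticsearch can also hold them in its secure keystore; a sketch (run from the bin directory; each command prompts for the value, and the two plain-text password lines above must then be removed, since Elasticsearch rejects having both forms set):

elasticsearch-keystore.bat add xpack.security.transport.ssl.keystore.secure_password
elasticsearch-keystore.bat add xpack.security.transport.ssl.truststore.secure_password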

2.1.5 Start

Double-click elasticsearch.bat in the bin directory to start. On first startup the console prints the password for the elastic user, the Kibana enrollment token, and related information; record these.

2.1.6 Access test

Open http://localhost:9200/. You are prompted for credentials: the username is elastic and the password appears in the startup output. After logging in, the node information JSON is displayed.

Success!
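The same check can be done from a terminal (curl ships with recent Windows versions; substitute the password from the startup output):

curl -u elastic:<password> http://localhost:9200/

A successful response is a small JSON document containing the node name (node-1), the cluster name, and the version number.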

2.1.7 Create a Kibana account

The elastic superuser cannot be used as Kibana's own connection account in kibana.yml, so create a dedicated account and grant it roles. Open cmd in the Elasticsearch bin directory and run:

elasticsearch-users useradd <username>

You are then prompted for a password; type one and the user is created.

Grant roles:

elasticsearch-users roles -a superuser <username>
elasticsearch-users roles -a kibana_system <username>

superuser lets the account access Elasticsearch on port 9200; kibana_system is required before Kibana can use this account to connect to Elasticsearch.

Check the granted roles:

elasticsearch-users roles -v <username>

Roles granted successfully.
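As a concrete example, the account that kibana.yml expects in section 2.2.1 (user-kibana with password 123456) can be created and granted in one step, since useradd also accepts -p and -r flags:

elasticsearch-users useradd user-kibana -p 123456 -r superuser,kibana_system
elasticsearch-users roles -v user-kibana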

2.2 kibana

2.2.1 Edit the configuration

Edit config/kibana.yml:

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: true
#server.ssl.certificate: E:\ELK\kibana-8.9.2\config\certs\http_ca.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]
#enterpriseSearch.host: 'http://localhost:3002'

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "user-kibana"
elasticsearch.password: "123456"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
#elasticsearch.serviceAccountToken: "eyJ2ZXIiOiI4LjkuMiIsImFkciI6WyIxOTIuMTY4LjMuNzQ6OTIwMCJdLCJmZ3IiOiI4MjBlYjQwZWJlZDZkMjczZjU0NzM5ZWY2Y2EyNDFkOTY3YmYzMDFlYWYyMGU5M2E4YWRkNGExMjlhNTBlMzAwIiwia2V5IjoiRzA1akE0OEI4R3Rabk1QRmNXVUU6Um45QmsxRWtTWU9pcFlxdW1haVlQQSJ9"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
logging.root.level: info

# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
#  type: file
#  fileName: /var/logs/kibana.log
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
i18n.locale: "zh-CN"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000
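As the commented elasticsearch.serviceAccountToken block above notes, a service account token can replace the username/password pair. One way to mint one, sketched with an arbitrary token name my-token (run in the Elasticsearch bin directory, then paste the printed token into kibana.yml and remove the username/password lines):

elasticsearch-service-tokens create elastic/kibana my-token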

2.2.2 Start

Double-click kibana.bat in the bin directory; once it finishes loading, the startup has succeeded.

2.2.3 Access test

Open http://localhost:5601 and log in with your account and password.

2.3 logstash

2.3.1 Edit the configuration

Edit the config/logstash-sample.conf file (or copy it and edit the copy):

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  file {
    path => "E:\ELK\logstash-8.9.2\logstash-test.log"   # also write a local copy of the log file
  }
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "QFpuWk-MOuwIwXsE=TM6"
  }
}
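While testing, it can also help to print every event to the console; a minimal sketch using the standard stdout output plugin (Logstash merges this with the output block above; remove it once the pipeline is verified):

output {
  stdout { codec => rubydebug }   # pretty-print each event to the console
}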

2.3.2 Start

Run from the bin directory:

logstash.bat -f ./config/logstash-sample.conf

Success: Logstash starts and listens for Beats connections on port 5044.
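To check the pipeline file for syntax errors without actually starting it, Logstash's --config.test_and_exit flag can be used first:

logstash.bat -f ./config/logstash-sample.conf --config.test_and_exit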

2.4 filebeat

2.4.1 Edit the configuration

Edit filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - E:\logs\*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
  enabled: true

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
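Before starting, Filebeat can verify both the configuration file and the connection to the Logstash output defined above:

filebeat.exe test config -c filebeat.yml
filebeat.exe test output -c filebeat.yml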

2.4.2 Start

Run from the directory containing filebeat.exe:

filebeat.exe -e -c filebeat.yml

Filebeat starts, reads logs matching E:\logs\*.log, and ships them to Logstash on port 5044.
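To confirm the end-to-end flow, check that an index matching the pattern from the Logstash output in section 2.3.1 has appeared in Elasticsearch (substitute the elastic password):

curl -u elastic:<password> "http://localhost:9200/_cat/indices/filebeat-*?v"

A line such as filebeat-8.9.2-2024.04.22 with a growing docs.count means logs are flowing from Filebeat through Logstash into Elasticsearch.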
