
Building a local Kafka cluster with docker-compose (KRaft, 3 nodes, SASL, kafka-ui)

Contents

1. Overview
2. Server planning
3. docker-compose files
   kafka{i}.yaml
   kafka-ui.yaml
4. Configuring cluster monitoring in kafka-ui
5. Parameter reference
6. Test scripts
   Producer (async): AsyncKafkaProducer1.py
   Consumer (async): AsyncKafkaConsumer1.py
7. References


1. Overview

  • Set up the Kafka cluster needed for a local development environment
  • Spread across 3 virtual machines, with the nodes interconnected as Docker containers

2. Server planning

Host            | Ports            | Notes
host001.dev.sb  | 9092, 9093, 9081 | kafka0 node; 9081 serves the kafka-ui
host002.dev.sb  | 9092, 9093       | kafka1 node
host003.dev.sb  | 9092, 9093       | kafka2 node
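The test scripts later in this post connect via the host00x.dev.sb names, while the brokers advertise themselves as kafka0/kafka1/kafka2. If a development machine cannot resolve those names via DNS, one option is local hosts entries. A sketch, assuming the IPs used in the extra_hosts blocks of the compose files (the file name hosts.example is hypothetical):

```shell
# Write example /etc/hosts additions mapping both naming schemes
# to the broker IPs assumed in this post.
cat > hosts.example <<'EOF'
172.16.20.60 host001.dev.sb kafka0
172.16.20.61 host002.dev.sb kafka1
172.16.20.62 host003.dev.sb kafka2
EOF
cat hosts.example
```

Append these lines to /etc/hosts (or the platform equivalent) on machines that run the producer/consumer scripts.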

3. docker-compose files

kafka{i}.yaml

- {i} stands for 0, 1, or 2 (one file per node)
- the SASL usernames and passwords are configured directly in the file

```yaml
services:
  kafka:
    image: 'bitnami/kafka:3.6.2'
    container_name: kafka{i}
    hostname: kafka{i}
    restart: always
    ports:
      - 9092:9092
      - 9093:9093
    environment:
      # KRaft
      - KAFKA_CFG_NODE_ID={i}
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka0:9093,1@kafka1:9093,2@kafka2:9093
      - KAFKA_KRAFT_CLUSTER_ID=sbcluster01-mnopqrstuv
      # Listeners
      - KAFKA_CFG_LISTENERS=INTERNAL://:9094,CLIENT://:9095,CONTROLLER://:9093,EXTERNAL://:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_PLAINTEXT,CLIENT:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
      # each node advertises its own hostname
      - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka{i}:9094,CLIENT://:9095,EXTERNAL://kafka{i}:9092
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_NUM_PARTITIONS=3
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
      # Clustering
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR=2
      # Log
      - KAFKA_CFG_LOG_RETENTION_HOURS=72
      # SASL
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
      - KAFKA_CONTROLLER_USER=kfkuser
      - KAFKA_CONTROLLER_PASSWORD=youknow
      - KAFKA_INTER_BROKER_USER=kfkuser
      - KAFKA_INTER_BROKER_PASSWORD=youknow
      - KAFKA_CLIENT_USERS=kfkuser
      - KAFKA_CLIENT_PASSWORDS=youknow
      # Others
      - TZ=Asia/Shanghai
    volumes:
      - '/data0/Server/Db/kafka{i}:/bitnami/kafka'
    extra_hosts:
      - "kafka0:172.16.20.60"
      - "kafka1:172.16.20.61"
      - "kafka2:172.16.20.62"
```
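Since the file differs between nodes only in the {i} placeholder, the three per-node files can be generated from a single shared template instead of being maintained by hand. A minimal sketch, assuming a hypothetical template file kafka.tpl.yaml that uses {i} as its placeholder (shortened here to one line for illustration):

```shell
# Create a tiny stand-in template; in practice this would be the
# full kafka{i}.yaml content shown above.
cat > kafka.tpl.yaml <<'EOF'
    container_name: kafka{i}
EOF

# Render kafka0.yaml, kafka1.yaml, kafka2.yaml by substituting {i}.
for i in 0 1 2; do
  sed "s/{i}/$i/g" kafka.tpl.yaml > "kafka$i.yaml"
done

cat kafka0.yaml
```

Each rendered file is then brought up on its own host, e.g. `docker compose -f kafka0.yaml up -d` on host001.dev.sb.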
kafka-ui.yaml

```yaml
services:
  kafka-ui:
    image: 'provectuslabs/kafka-ui:master'
    container_name: kafka-ui
    restart: always
    ports:
      - 9081:8080
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - DYNAMIC_CONFIG_ENABLED=true
      - AUTH_TYPE=LOGIN_FORM
      - SPRING_SECURITY_USER_NAME=admin
      - SPRING_SECURITY_USER_PASSWORD=youknow
    extra_hosts:
      - "kafka0:172.16.20.60"
      - "kafka1:172.16.20.61"
      - "kafka2:172.16.20.62"
```

4. Configuring cluster monitoring in kafka-ui

With DYNAMIC_CONFIG_ENABLED=true, the cluster can be added interactively through the UI's configuration wizard at http://host001.dev.sb:9081 (bootstrap servers, SASL_PLAINTEXT, PLAIN mechanism, and the kfkuser credentials from above).
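Instead of the wizard, the cluster connection can also be declared statically in kafka-ui.yaml via environment variables. A sketch, assuming the SASL credentials configured in kafka{i}.yaml and the `KAFKA_CLUSTERS_0_*` variable names from the kafka-ui documentation:

```yaml
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka0:9092,kafka1:9092,kafka2:9092
      - KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_PLAINTEXT
      - KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=PLAIN
      - KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="kfkuser" password="youknow";
```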

5. Parameter reference

- KAFKA_CFG_PROCESS_ROLES: roles this node plays (broker, controller). Example: KAFKA_CFG_PROCESS_ROLES=controller,broker
- KAFKA_KRAFT_CLUSTER_ID: cluster ID; must be identical on all nodes of the same cluster
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: controller quorum voter list
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES: name of the controller listener
- KAFKA_CFG_NUM_PARTITIONS: default number of partitions for new topics
- KAFKA_CFG_LISTENERS: addresses and ports the listeners bind to
- KAFKA_CFG_ADVERTISED_LISTENERS: addresses and ports advertised to clients
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: protocol for each listener; SASL_PLAINTEXT here means authentication only, the transport itself is not encrypted
- KAFKA_CLIENT_USERS: SASL client username(s)
- KAFKA_CLIENT_PASSWORDS: SASL client password(s)

Clustering:

- KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR: replication factor of Kafka's internal __consumer_offsets topic, which stores consumer offsets
- KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: replication factor of Kafka's internal __transaction_state topic, which stores the transaction log
- KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR: minimum number of in-sync replicas (ISR) for the internal __transaction_state topic; the ISR is the set of replicas that stay in sync with the leader

Log:

- KAFKA_CFG_LOG_DIRS: log (data) directories
- KAFKA_CFG_LOG_RETENTION_HOURS: maximum retention time; data older than this is handled according to the log.cleanup.policy setting (default 168 hours, i.e. one week)
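To make the listener-to-protocol pairing concrete, here is a standalone illustration (not part of the deployment) of how the KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP value used above decomposes:

```python
# The exact value from kafka{i}.yaml: a comma-separated list of
# LISTENER_NAME:PROTOCOL pairs.
listener_map = (
    "INTERNAL:SASL_PLAINTEXT,CLIENT:SASL_PLAINTEXT,"
    "CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT"
)

# Split into {listener name: security protocol}.
parsed = dict(entry.split(":", 1) for entry in listener_map.split(","))

print(parsed["CONTROLLER"])  # the controller listener stays PLAINTEXT
print(parsed["EXTERNAL"])    # client-facing listeners require SASL
```

Only the CONTROLLER listener is left unauthenticated here; the controller protocol is instead secured by the KAFKA_CONTROLLER_USER/PASSWORD pair.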

6. Test scripts

Producer (async): AsyncKafkaProducer1.py

```python
from confluent_kafka import Producer
import json


def delivery_report(err, msg):
    """Called once for each message produced to indicate delivery result.
    Triggered by poll() or flush()."""
    if err is not None:
        print(f"Message delivery failed: {err}")
    else:
        print(f"Message delivered to {msg.topic()} [{msg.partition()}]")


def create_async_producer(config):
    """Creates an instance of an asynchronous Kafka producer."""
    return Producer(config)


def produce_messages(producer, topic, messages):
    """Asynchronously produces messages to a Kafka topic."""
    for message in messages:
        # Trigger any available delivery report callbacks from previous produce() calls
        producer.poll(0)
        # Asynchronously produce a message; the delivery report callback
        # will be triggered from poll() above, or flush() below, when the
        # message has been successfully delivered or failed permanently.
        producer.produce(
            topic, json.dumps(message).encode("utf-8"), callback=delivery_report
        )
    # Wait for any outstanding messages to be delivered and delivery report
    # callbacks to be triggered.
    producer.flush()


if __name__ == "__main__":
    # Kafka configuration
    # Replace these with your server's configuration
    conf = {
        "bootstrap.servers": "host001.dev.sb:9092,host002.dev.sb:9092,host003.dev.sb:9092",
        "client.id": "PythonProducer",
        "security.protocol": "SASL_PLAINTEXT",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "kfkuser",
        "sasl.password": "youknow",
    }

    # Create an asynchronous Kafka producer
    async_producer = create_async_producer(conf)

    # Messages to send to Kafka
    messages_to_send = [{"key": "value1a"}, {"key": "value2a"}, {"key": "value3a"}]

    # Produce messages
    produce_messages(async_producer, "zx001.msg.user", messages_to_send)
```
Consumer (async): AsyncKafkaConsumer1.py

```python
from confluent_kafka import Consumer, KafkaError, KafkaException
import asyncio
import json
import logging
from datetime import datetime

# Log format: '%(...)' placeholders refer to log record attributes
log_format = "%(message)s"
logging.basicConfig(
    filename="logs/kafka_messages1.log", format=log_format, level=logging.INFO
)


async def consume_loop(consumer, topics):
    try:
        # Subscribe to the topics
        consumer.subscribe(topics)
        while True:
            # Poll for the next message
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                if msg.error().code() == KafkaError._PARTITION_EOF:
                    # End of partition event
                    print(
                        "%% %s [%d] reached end at offset %d\n"
                        % (msg.topic(), msg.partition(), msg.offset())
                    )
                else:
                    raise KafkaException(msg.error())
            else:
                # Normal message: decode, stamp with receive time, log as JSON
                raw_message = msg.value()
                str_msg = raw_message.decode("utf-8")
                parsed_message = json.loads(str_msg)
                parsed_message["time"] = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
                print(f"Received message: {type(parsed_message)} : {parsed_message}")
                json_data = json.dumps(parsed_message, ensure_ascii=False)
                logging.info("{}".format(json_data))
            await asyncio.sleep(0.01)  # yield control briefly
    finally:
        # Close the consumer
        consumer.close()


async def consume():
    # Consumer configuration
    conf = {
        "bootstrap.servers": "host001.dev.sb:9092,host002.dev.sb:9092,host003.dev.sb:9092",
        "group.id": "MsgGroup2",
        "auto.offset.reset": "earliest",
        "client.id": "PythonConsumer",
        "security.protocol": "SASL_PLAINTEXT",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "kfkuser",
        "sasl.password": "youknow",
    }

    # Create the consumer and start the consume loop
    consumer = Consumer(conf)
    await consume_loop(consumer, ["zx001.msg.user"])


if __name__ == "__main__":
    asyncio.run(consume())
```

7. References

- Apache Kafka® Quick Start - Local Install With Docker

- kafka-ui-docs/configuration/configuration-wizard.md at main · provectus/kafka-ui-docs · GitHub

- https://juejin.cn/post/7187301063832109112
