Install Docker:
apt -y update
apt -y install curl git
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Clone the repository:
cd /opt
git clone https://github.com/zulip/docker-zulip.git
cd docker-zulip
Zulip's compose setup uses Docker secrets to store sensitive data. We need to create a new .env file:
nano .env
Write at least the following, including (but not limited to) the PostgreSQL database password, the Redis password, and your SMTP mail password:
ZULIP__POSTGRES_PASSWORD=example_postgres_password
ZULIP__MEMCACHED_PASSWORD=example_memcached_password
ZULIP__RABBITMQ_PASSWORD=example_rabbitmq_password
ZULIP__REDIS_PASSWORD=example_redis_password
ZULIP__SECRET_KEY=example_django_secret_key
ZULIP__EMAIL_PASSWORD=example_outgoing_email_password
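A quick way to fill these in is to generate random hex values (a sketch using /dev/urandom; the variable names are the ones from the listing above, and the SMTP password must of course remain your real one):

```shell
# Emit a populated .env with random hex secrets; EMAIL_PASSWORD must stay
# your real SMTP password, so it is left as a placeholder here.
gen_secret() { head -c "$1" /dev/urandom | od -An -tx1 | tr -d ' \n'; }
for svc in POSTGRES MEMCACHED RABBITMQ REDIS; do
  echo "ZULIP__${svc}_PASSWORD=$(gen_secret 16)"
done
echo "ZULIP__SECRET_KEY=$(gen_secret 32)"
echo "ZULIP__EMAIL_PASSWORD=your_real_smtp_password"
```

Redirect the output into .env, or paste the generated values in by hand.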
Edit the compose.override.yaml override file:
nano compose.override.yaml
At minimum, change the following in this override file:
services:
  zulip:
    environment:
      SETTING_EXTERNAL_HOST: "zulip.example.com"
      SETTING_ZULIP_ADMINISTRATOR: "[email protected]"
      TRUST_GATEWAY_IP: True
      SETTING_EMAIL_HOST: "mail.example.com"
      SETTING_EMAIL_HOST_USER: "smtp"
      SETTING_EMAIL_PORT: "587"
      SETTING_EMAIL_USE_SSL: False
      SETTING_EMAIL_USE_TLS: True
      SETTING_ZULIP_SERVICE_PUSH_NOTIFICATIONS: True
      SETTING_ZULIP_SERVICE_SUBMIT_USAGE_STATISTICS: False
Edit the base compose.yaml file:
nano compose.yaml
We are using a reverse proxy, so comment out the Zulip container's port 443 here and change the exposed port 80 to 8089:
---
services:
  zulip:
    image: "ghcr.io/zulip/zulip-server:11.6-1"
    restart: unless-stopped
    build:
      context: .
    ports:
      - name: smtp
        target: 25
        published: 25
        app_protocol: smtp
      - name: http
        target: 80
        published: 8089
        app_protocol: http
      # - name: https
      #   target: 443
      #   published: 443
      #   app_protocol: https
Pull the images and initialize Zulip:
docker compose pull
docker compose run --rm zulip app:init
If everything goes well, you should see output like the screenshot below.
If the output does not end with:
=== End Initial Configuration Phase ===
read the output carefully for warnings or errors.
Now you can start Zulip:
docker compose up zulip --wait
Configure the Ferron reverse proxy:
nano /etc/ferron.kdl
Write the following:
zulip.example.com {
  proxy "http://127.0.0.1:8089/"
  proxy_request_header_replace "Host" "{header:Host}"
}
Reload Ferron:
systemctl reload ferron
Generate a link, then open it in a browser to create a new organization:
docker compose exec -u zulip zulip \
/home/zulip/deployments/current/manage.py generate_realm_creation_link
As shown here:
The result:
For mobile push notifications, besides setting SETTING_ZULIP_SERVICE_PUSH_NOTIFICATIONS: True, you also need to register with the following command before they will work:
docker compose exec -u zulip zulip \
/home/zulip/deployments/current/manage.py register_server
If your hostname (fully qualified domain name, FQDN) was registered before, you can migrate the registration with:
docker compose exec -u zulip zulip \
/home/zulip/deployments/current/manage.py register_server --registration-transfer
Troubleshooting:
docker compose exec zulip bash
cat /var/log/zulip/errors.log
Relay server configuration:
{
  "log": {
    "level": "info"
  },
  "dns": {
    "servers": [
      {
        "type": "tls",
        "server": "8.8.8.8"
      }
    ]
  },
  "inbounds": [
    {
      "type": "anytls",
      "listen": "0.0.0.0",
      "listen_port": 8443,
      "users": [
        {
          "name": "fuckccp",
          "password": "hidden"
        }
      ],
      "padding_scheme": [
        "stop=8",
        "0=30-30",
        "1=100-400",
        "2=400-500,c,500-1000,c,500-1000,c,500-1000,c,500-1000",
        "3=9-9,500-1000",
        "4=500-1000",
        "5=500-1000",
        "6=500-1000",
        "7=500-1000"
      ],
      "tls": {
        "enabled": true,
        "server_name": "fuckccp.example.com",
        "alpn": [
          "h2"
        ],
        "acme": {
          "domain": [
            "fuckccp.example.com"
          ],
          "dns01_challenge": {
            "provider": "cloudflare",
            "api_token": "hidden"
          }
        }
      }
    }
  ],
  "outbounds": [
    {
      "type": "direct",
      "tag": "direct"
    },
    {
      "type": "shadowsocks",
      "tag": "unlock-out",
      "server": "1.2.3.4",
      "server_port": 8081,
      "method": "chacha20-ietf-poly1305",
      "password": "hidden"
    }
  ],
  "route": {
    "rules": [
      {
        "action": "sniff"
      },
      {
        "protocol": "dns",
        "action": "hijack-dns"
      },
      {
        "rule_set": [
          "geosite-dmm",
          "geosite-dmm-porn",
          "geosite-abema",
          "custom-mgstage"
        ],
        "outbound": "unlock-out"
      }
    ],
    "rule_set": [
      {
        "type": "local",
        "tag": "custom-mgstage",
        "format": "binary",
        "path": "/root/mgstage.srs"
      },
      {
        "type": "remote",
        "tag": "geosite-dmm",
        "format": "binary",
        "url": "https://raw.githubusercontent.com/SagerNet/sing-geosite/rule-set/geosite-dmm.srs",
        "download_detour": "direct",
        "update_interval": "7d"
      },
      {
        "type": "remote",
        "tag": "geosite-dmm-porn",
        "format": "binary",
        "url": "https://raw.githubusercontent.com/SagerNet/sing-geosite/rule-set/geosite-dmm-porn.srs",
        "download_detour": "direct",
        "update_interval": "7d"
      },
      {
        "type": "remote",
        "tag": "geosite-abema",
        "format": "binary",
        "url": "https://raw.githubusercontent.com/SagerNet/sing-geosite/rule-set/geosite-abema.srs",
        "download_detour": "direct",
        "update_interval": "7d"
      }
    ]
  }
}
Exit server configuration:
{
  "log": {
    "level": "info"
  },
  "dns": {
    "servers": [
      {
        "type": "tls",
        "server": "8.8.8.8"
      }
    ]
  },
  "endpoints": [
    {
      "type": "wireguard",
      "tag": "wg-unlock",
      "system": true,
      "name": "wg0",
      "mtu": 1280,
      "address": [
        "10.0.0.2/32"
      ],
      "private_key": "hidden",
      "peers": [
        {
          "address": "engage.cloudflareclient.com",
          "port": 2408,
          "public_key": "hidden",
          "allowed_ips": [
            "0.0.0.0/0"
          ],
          "persistent_keepalive_interval": 30,
          "reserved": [0, 0, 0]
        }
      ]
    }
  ],
  "inbounds": [
    {
      "type": "shadowsocks",
      "listen": "::",
      "listen_port": 8081,
      "method": "chacha20-ietf-poly1305",
      "password": "hidden"
    }
  ],
  "outbounds": [
    {
      "type": "direct",
      "tag": "direct"
    }
  ],
  "route": {
    "rules": [
      {
        "action": "sniff"
      },
      {
        "protocol": "dns",
        "action": "hijack-dns"
      },
      {
        "rule_set": [
          "geosite-dmm",
          "geosite-dmm-porn",
          "geosite-abema",
          "custom-mgstage"
        ],
        "outbound": "wg-unlock"
      }
    ],
    "rule_set": [
      {
        "type": "local",
        "tag": "custom-mgstage",
        "format": "binary",
        "path": "/root/mgstage.srs"
      },
      {
        "type": "remote",
        "tag": "geosite-dmm",
        "format": "binary",
        "url": "https://raw.githubusercontent.com/SagerNet/sing-geosite/rule-set/geosite-dmm.srs",
        "download_detour": "direct",
        "update_interval": "7d"
      },
      {
        "type": "remote",
        "tag": "geosite-dmm-porn",
        "format": "binary",
        "url": "https://raw.githubusercontent.com/SagerNet/sing-geosite/rule-set/geosite-dmm-porn.srs",
        "download_detour": "direct",
        "update_interval": "7d"
      },
      {
        "type": "remote",
        "tag": "geosite-abema",
        "format": "binary",
        "url": "https://raw.githubusercontent.com/SagerNet/sing-geosite/rule-set/geosite-abema.srs",
        "download_detour": "direct",
        "update_interval": "7d"
      }
    ]
  }
}
mgstage.srs is a custom rule of mine for unblocking MGSTAGE; remove it if you don't need it. If you do, create a JSON file:
nano mgstage.json
Write the following:
{
  "version": 3,
  "rules": [
    {
      "domain_suffix": [
        "mgstage.com"
      ]
    }
  ]
}
Compile it into the srs format:
sing-box rule-set compile mgstage.json
The WireGuard config the exit server uses for unblocking was generated with wgcf. First install wgcf:
wget https://github.com/ViRb3/wgcf/releases/download/v2.2.30/wgcf_2.2.30_linux_amd64
mv wgcf_2.2.30_linux_amd64 wgcf
chmod +x wgcf
Generate the WireGuard config:
./wgcf register
./wgcf generate
View the WireGuard config and copy the PrivateKey and PublicKey into the sing-box config file:
cat wgcf-profile.conf
With this setup, clients connect to the relay server's anytls node; traffic matching the DMM rules is forwarded over shadowsocks to the exit server, which in turn routes the DMM traffic into WireGuard. All other traffic goes out directly from the relay server.
Install Docker:
apt -y update
apt -y install curl
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Create the directory and compose file:
mkdir /opt/blufiles && cd /opt/blufiles && nano docker-compose.yml
Write the following:
services:
  blufiles:
    image: ghcr.io/bludood/files:latest
    restart: unless-stopped
    ports:
      - 127.0.0.1:1337:1337
    volumes:
      - ./data:/data
    environment:
      - DATABASE_URL=postgresql://postgres:dbpassword@postgres:5432/files
      - STORAGE_DIR=/data
      - TRUST_PROXY=true
    depends_on:
      postgres:
        condition: service_healthy
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dbpassword
      POSTGRES_DB: files
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 5s
      timeout: 5s
      retries: 5
Start it:
docker compose up -d
Configure the Ferron reverse proxy:
nano /etc/ferron.kdl
Write the following:
blufiles.example.com {
  proxy "http://127.0.0.1:1337/"
  proxy_request_header_replace "Host" "{header:Host}"
}
Reload Ferron:
systemctl reload ferron
The result:
You can check with this command; if metadata is nearly full, that's the problem:
sudo btrfs fi usage /
At this point you most likely can't run a balance directly; a direct balance just fails with: no space left on device.
The workaround: create a 1 GB temporary file in memory (if your machine has plenty of RAM, you can make it larger):
truncate -s 1G /tmp/btrfs_rescue.img
Set it up as a loop device:
sudo losetup /dev/loop50 /tmp/btrfs_rescue.img
Temporarily add this loop device to the filesystem:
sudo btrfs device add /dev/loop50 /
Now a balance can run, but I found it still didn't free up much metadata space, so I simply converted DUP to single; on my HDD, single also performs a bit better:
sudo btrfs balance start -mconvert=single --force /
Cleanup: remove the loop device we just added:
sudo btrfs device remove /dev/loop50 /
Detach the loop device:
sudo losetup -d /dev/loop50
Delete the temporary file:
rm /tmp/btrfs_rescue.img
Balance once more with a fairly aggressive dusage, since the data allocation has plenty of free space:
sudo btrfs balance start -dusage=70 /
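If you'd rather not jump straight to -dusage=70, a common pattern is to step the threshold up gradually, since each pass only relocates chunks below the given usage and cheap early passes free space quickly. A sketch that just prints the commands (pipe it through sh with sudo once it looks right):

```shell
# Print progressively more aggressive data-balance passes; each pass only
# relocates chunks whose usage is below the threshold.
balance_plan() {
  for usage in 10 25 50 70; do
    echo "btrfs balance start -dusage=$usage /"
  done
}
balance_plan
```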
Back to normal:
[Optional] Converting DUP to single reduces resilience; if that worries you, convert it back:
sudo btrfs balance start -mconvert=dup /
Converting back should use about 2.56 GB more, but Unallocated is now 17.9 GB, which is plenty.
Tech stack:
Features implemented so far:
The documentation isn't finished yet and some details still need polish, so this post just records the Docker deployment steps.
Install Docker / NGINX / Certbot:
apt -y update
apt -y install curl git nginx python3-certbot-nginx
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Clone the project repository and prepare the data directories:
git clone https://github.com/xiya233/MoeVideo.git
cd MoeVideo
mkdir -p data/db data/storage data/temp data/redis
Edit .env.docker:
nano .env.docker
Assuming your domains are app.example.com (frontend) and api.example.com (backend), change at least the following:
# API address the frontend uses to reach the backend
NEXT_PUBLIC_API_BASE_URL=https://api.example.com/api/v1
# Keep identical to NEXT_PUBLIC_API_BASE_URL
API_BASE_URL=https://api.example.com/api/v1
# Use a strong random string of at least 32 characters
JWT_SECRET=replace-this-with-a-very-strong-secret
# Backend API address
PUBLIC_BASE_URL=https://api.example.com
# Must be enabled with HTTPS
AUTH_COOKIE_SECURE=true
# Leave empty
AUTH_COOKIE_DOMAIN=
# lax is the recommended SameSite value: a balance of security and normal in-site navigation
AUTH_COOKIE_SAMESITE=lax
AUTH_COOKIE_PATH=/
# Set the frontend domain
CORS_ALLOWED_ORIGINS=https://app.example.com
# Increase to avoid timeout retries caused by slow downloads
IMPORT_URL_TIMEOUT_SEC=18000
# Increase to avoid page-resolution timeouts on slow networks
IMPORT_PAGE_RESOLVER_TIMEOUT_SEC=90
# Domains forced through the fallback, comma-separated; leave empty if unneeded
IMPORT_FORCE_FALLBACK_DOMAINS=missav.ai,24av.net
Start with the prebuilt GHCR images:
docker compose --env-file .env.docker -f docker-compose.yml -f docker-compose.ghcr.yml pull
docker compose --env-file .env.docker -f docker-compose.yml -f docker-compose.ghcr.yml up -d --no-build
Create the admin account:
docker compose --env-file .env.docker run --rm backend \
/app/moevideo-admin bootstrap \
--email [email protected] \
--username admin \
--password 'ChangeMe-StrongPassw0rd!' \
--db /data/db/moevideo.db
Create the NGINX site config file:
nano /etc/nginx/sites-available/moevideo.conf
Write the following:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream moevideo_frontend {
    server 127.0.0.1:3000;
}

upstream moevideo_backend {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    listen [::]:80;
    server_name app.example.com;
    client_max_body_size 2048m;

    location / {
        proxy_pass http://moevideo_frontend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name api.example.com;
    client_max_body_size 2048m;

    location /api/ {
        proxy_pass http://moevideo_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    location /media/ {
        proxy_pass http://moevideo_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /healthz {
        proxy_pass http://moevideo_backend/healthz;
    }
}
Enable the site:
ln -s /etc/nginx/sites-available/moevideo.conf /etc/nginx/sites-enabled/moevideo.conf
Request the certificates:
certbot --nginx -d app.example.com -d api.example.com
After logging in, visit /admin to open the admin panel. I recommend tweaking the yt-dlp options to improve download speed:
--concurrent-fragments 8 --fragment-retries 10 --retries 10 --socket-timeout 30 --force-ipv4 --impersonate Chrome-136
As shown here:
Project screenshots:
S3 storage hasn't been tested yet and may have bugs; I recommend local storage for now.
Next, the fallback feature for scraping media links from web pages. The logic is: sites yt-dlp supports are downloaded with yt-dlp directly; if yt-dlp reports an unsupported URL, rebrowser-playwright + Chromium probes the page for media links, and the user manually picks one of the detected links to feed back to yt-dlp. Currently only the "unsupported URL" exit triggers the fallback; other error exits do not, for example being blocked by Cloudflare, a 403 from the page, and so on.
So there's a catch: for yt-dlp to decide whether a site is supported, it first has to reach the target page. If the page is unreachable in the first place (blocked by Cloudflare or for some other odd reason), yt-dlp can exit in many different ways, and anything other than "unsupported URL" won't trigger the fallback. So I added a forced (pinned) fallback feature: the user can specify domains that go straight to rebrowser-playwright + Chromium. I tested downloading from missav, 24av, and similar sites with it and it works fine; honestly, I built the feature specifically to download from those sites = =
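The decision rule above can be sketched in a few lines of shell (hypothetical helper names; the real project implements this internally around yt-dlp's exit behavior):

```shell
# Only yt-dlp's "Unsupported URL" failure triggers the browser fallback;
# any other failure (Cloudflare block, 403, ...) is reported as-is.
decide_fallback() {
  if printf '%s' "$1" | grep -qi 'unsupported url'; then
    echo fallback
  else
    echo no-fallback
  fi
}
decide_fallback 'ERROR: Unsupported URL: https://example.com/watch'   # prints: fallback
```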
Features (quoted from the project page):
Warning (quoted from the project page):
This tool is for educational purposes only. Tampering with upload/download statistics on BitTorrent trackers may violate private trackers' terms of service and can lead to account suspension or bans. Use at your own risk.
The author recommends pairing it with a VPN. Personally, I think the VPN matters little if Rustatio itself runs on a VPS; what matters most are the rules of the tracker in question. MT, for example, forbids VPNs, so using one could backfire = = I don't normally use paid VPNs either, but to demonstrate Rustatio's full feature set, here are the VPN setup steps, using Cloudflare's WARP as the example.
First download wgcf:
wget https://github.com/ViRb3/wgcf/releases/download/v2.2.30/wgcf_2.2.30_linux_amd64
mv wgcf_2.2.30_linux_amd64 wgcf
chmod +x wgcf
Generate a WireGuard config with wgcf:
./wgcf register
./wgcf generate
View wgcf-profile.conf:
cat wgcf-profile.conf
If all went well it will contain something like this; save the PrivateKey and PublicKey:
[Interface]
PrivateKey =
Address = 172.16.0.2/32, 2606:4700:110:8a24:8971:6723:947c:eec4/128
DNS = 1.1.1.1, 1.0.0.1, 2606:4700:4700::1111, 2606:4700:4700::1001
MTU = 1280
[Peer]
PublicKey =
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = engage.cloudflareclient.com:2408
Since Gluetun's WIREGUARD_ENDPOINT_IP doesn't accept domains, engage.cloudflareclient.com has to be replaced with an IP before it can connect. I didn't know the IP, so I pinged it = =
ping -4 engage.cloudflareclient.com
PING engage.cloudflareclient.com (162.159.192.1) 56(84) bytes of data.
64 bytes from 162.159.192.1: icmp_seq=1 ttl=58 time=1.60 ms
64 bytes from 162.159.192.1: icmp_seq=2 ttl=58 time=1.60 ms
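If you'd rather script the lookup than read it off the ping banner, the IP can be cut out of the first output line (a sketch; any resolver such as dig or getent works too):

```shell
# Pull the resolved address out of ping's "PING host (ip)" banner line.
first_ip() { sed -n 's/^PING [^ ]* (\([0-9.]*\)).*/\1/p' | head -n 1; }
# ping -c 1 -4 engage.cloudflareclient.com | first_ip
```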
Create the compose file:
mkdir -p /opt/rustatio && cd /opt/rustatio && nano docker-compose.yml
Write the following:
services:
  gluetun:
    image: qmcgaw/gluetun
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=custom
      - VPN_TYPE=wireguard
      - WIREGUARD_ENDPOINT_IP=162.159.192.1
      - WIREGUARD_ENDPOINT_PORT=2408
      - WIREGUARD_PUBLIC_KEY=
      - WIREGUARD_PRIVATE_KEY=
      - WIREGUARD_ADDRESSES=10.64.1.89/32
      - HTTP_CONTROL_SERVER_AUTH_DEFAULT_ROLE={"auth":"none"}
    ports:
      - "30080:8080" # Rustatio Web UI
  rustatio:
    image: ghcr.io/takitsu21/rustatio:latest
    container_name: rustatio
    restart: unless-stopped
    network_mode: service:gluetun
    depends_on:
      gluetun:
        condition: service_healthy
    environment:
      - PORT=8080
      - RUST_LOG=${RUST_LOG:-info}
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - AUTH_TOKEN=adminpasswd
      # Optional: Watch folder configuration (auto-detected if volume is mounted)
      - WATCH_AUTO_START=false # Set to true to auto-start faking new torrents
    volumes:
      - rustatio_data:/data
      # Optional: Uncomment to enable watch folder feature
      # - ${TORRENTS_DIR:-./path/to/your/torrents}:/torrents
volumes:
  rustatio_data:
Notes:
1. Using HTTP_CONTROL_SERVER_AUTH_DEFAULT_ROLE={"auth":"none"} in Gluetun is an insecure configuration, but we don't expose the control server's port, so it has no impact here. It's also unavoidable, because Rustatio currently doesn't support authenticated access to Gluetun's API.
2. To set the Rustatio Web UI password, change AUTH_TOKEN=.
Start it:
docker compose up -d
Visit IP:30080 and log in with the value of AUTH_TOKEN=:
If Gluetun is working, the VPN's IP should be displayed here:
The result:
Gluetun can be used in many ways; plenty of people pair it with qBittorrent so BitTorrent downloads avoid DMCA notices and the like. Gluetun even has a built-in shadowsocks, though this article doesn't configure any of that; tinker with it if you're interested. As for Rustatio, once again: this article only shares information. This tool is for educational purposes only. Tampering with upload/download statistics on BitTorrent trackers may violate private trackers' terms of service and can lead to account suspension or bans. Use at your own risk.
References:
https://github.com/qdm12/gluetun-wiki/blob/main/setup/providers/custom.md
https://github.com/qdm12/gluetun-wiki/blob/main/setup/advanced/control-server.md
Haven is still a very new project under heavy development, but it's already quite feature-complete. I deployed and tested it; the features I found most impressive are:
Here's how it looks. Channel (group chat):
DM (direct message):
The deployment steps follow. Install Docker:
apt -y update
apt -y install curl
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Create the compose file:
mkdir -p /opt/haven && cd /opt/haven && nano docker-compose.yml
Write the following:
services:
  haven:
    image: ghcr.io/ancsemi/haven:latest
    container_name: haven
    restart: unless-stopped
    env_file: .env
    ports:
      - "61005:61005"
    volumes:
      - ./haven-data:/data
  turn:
    container_name: coturn-server
    image: docker.io/coturn/coturn
    restart: unless-stopped
    network_mode: "host"
    volumes:
      - ./coturn.conf:/etc/coturn/turnserver.conf
Create the coturn config file:
nano coturn.conf
Write the following:
use-auth-secret
static-auth-secret=E6Hk8mbrQobSp49Slgen6cYNNGuYx8NjVXh7GV8gWebNQMXY8xUTMjMjnSiMxu32 # set the coturn shared secret
realm=coturn.example.com # set your domain
fingerprint
min-port=61200
max-port=61500
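With use-auth-secret, coturn expects ephemeral credentials derived from the shared secret (the TURN REST API scheme): the username is an expiry timestamp and the password is base64(HMAC-SHA1(secret, username)). Haven derives these automatically from TURN_SECRET, but if you want to test the TURN server by hand (e.g. with coturn's turnutils_uclient), a matching credential can be generated like this (a sketch; the secret is the example value from above):

```shell
# Derive an ephemeral TURN credential from coturn's static-auth-secret.
# $1 = shared secret, $2 = credential lifetime in seconds.
turn_cred() {
  username=$(( $(date +%s) + $2 ))
  password=$(printf '%s' "$username" | openssl dgst -sha1 -hmac "$1" -binary | openssl base64)
  echo "$username:$password"
}
turn_cred 'E6Hk8mbrQobSp49Slgen6cYNNGuYx8NjVXh7GV8gWebNQMXY8xUTMjMjnSiMxu32' 3600
```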
Create the .env file:
nano .env
Write the following:
# Server Config
PORT=61005
HOST=0.0.0.0
# Display name for this server (shown in multi-server sidebar)
SERVER_NAME=Haven
# Secret key for JWT tokens — CHANGE THIS to a long random string!
# (Auto-generated on first boot if left as default)
JWT_SECRET=G1hp3yQW4ZNPz1aMNwXwC8wOUFPlQBS0UVdx0BjOSg5NGFtBzSuiHPdcOju4QuPt
# Your admin username (register with this name first to get admin powers)
ADMIN_USERNAME=imlala
# Optional: HTTPS (required for voice chat over the internet)
# Paths are relative to the data directory:
# Windows: %APPDATA%\Haven\
# Linux/macOS: ~/.haven/
# SSL_CERT_PATH=./certs/cert.pem
# SSL_KEY_PATH=./certs/key.pem
# Force HTTP mode (useful behind a reverse proxy like Caddy, nginx, etc.)
# Set to true to skip SSL even if certificates exist.
FORCE_HTTP=true
# Optional: Override the data directory location
# HAVEN_DATA_DIR=
# Optional: TURN server for voice/screen sharing across the internet.
# Without TURN, voice only works on the same network (LAN).
# Recommended: run coturn on the same box or a cheap VPS.
#
# Option A — Shared secret (coturn --use-auth-secret, recommended):
TURN_URL=turn:coturn.example.com:3478
TURN_SECRET=E6Hk8mbrQobSp49Slgen6cYNNGuYx8NjVXh7GV8gWebNQMXY8xUTMjMjnSiMxu32
#
# Option B — Static credentials:
# TURN_URL=turn:your-server.com:3478
# TURN_USERNAME=haven
# TURN_PASSWORD=your-password
# Optional: GIPHY API key (free — get one at https://developers.giphy.com/dashboard/)
# GIPHY_API_KEY=
Start haven and coturn:
docker compose up -d
Configure the Ferron reverse proxy:
nano /etc/ferron.kdl
Write the following:
haven.example.com {
  proxy "http://127.0.0.1:61005/"
  proxy_request_header_replace "Host" "{header:Host}"
}
Reload Ferron:
systemctl reload ferron
If all is well, visiting haven.example.com now shows the login and registration page:
Note the username you register: it must match the ADMIN_USERNAME= set earlier in .env; only the account matching that name gets admin privileges.
Why did I strike out "easy to install"? Because right now I find this project quite hard to install: there is no official production deployment documentation, and the deployment involves far too many scattered steps. That contradicts the official pitch, hence the strikethrough = =
Still, to try it out, here is my docker compose deployment. If official deployment docs appear later, follow those instead. The project still seems to be in beta; if it changes substantially, I can't promise this deployment method will keep working.
Prerequisites:
1. A Debian server with at least 4 GB of RAM, with ports 80, 443, 9199, 9299, and 9980 open.
2. DNS A records for the example domains used in this article: drive.example.com, onlyoffice.example.com, collabora.example.com.
3. An S3 object storage deployment: RustFS or Garage. RustFS is recommended, because Drive's document editing requires S3 versioning; Garage lacks it, which makes documents fail to save. Example S3 domain in this article: rustfs.example.com
4. A VoidAuth OIDC authentication deployment. Example domain: voidauth.example.com
Create an OIDC app in VoidAuth:
Set the Redirect URLs:
https://drive.example.com/api/v1.0/callback/
Set the PostLogout URL:
https://drive.example.com/api/v1.0/logout-callback/
Create a bucket in the RustFS console and enable versioning:
With the preparation done, we can deploy Drive. Install Docker on the server:
apt -y update
apt -y install curl
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Create the compose file:
mkdir -p /opt/drive && cd /opt/drive && nano docker-compose.yml
Write the following; the places you need to change are commented:
x-common-env: &common-env
  BACKEND_HOST: backend
  FRONTEND_HOST: frontend
  # Django
  DJANGO_ALLOWED_HOSTS: "*"
  DJANGO_SECRET_KEY: 7cafccee9288097a2b3eb64305b0d07bff6682bc50a2b3f99a9696ef2e000fab # generate with openssl rand -hex 32
  DJANGO_SETTINGS_MODULE: drive.settings
  DJANGO_CONFIGURATION: Production
  # Logging
  LOGGING_LEVEL_HANDLERS_CONSOLE: ERROR
  LOGGING_LEVEL_LOGGERS_ROOT: INFO
  LOGGING_LEVEL_LOGGERS_APP: INFO
  # Python
  PYTHONPATH: /app
  # Mail
  DJANGO_EMAIL_HOST: mail.example.com
  DJANGO_EMAIL_HOST_USER: smtp
  DJANGO_EMAIL_HOST_PASSWORD: smtppassword
  DJANGO_EMAIL_PORT: 587
  DJANGO_EMAIL_FROM: [email protected]
  DJANGO_EMAIL_USE_TLS: true
  DJANGO_EMAIL_BRAND_NAME: "La Suite Numérique"
  DJANGO_EMAIL_LOGO_IMG: "https://drive.example.com/assets/logo-suite-numerique.png"
  # Media S3
  STORAGES_STATICFILES_BACKEND: django.contrib.staticfiles.storage.StaticFilesStorage
  AWS_S3_ACCESS_KEY_ID: imlala
  AWS_S3_SECRET_ACCESS_KEY: s3secretaccesskey
  AWS_S3_REGION_NAME: auto
  AWS_STORAGE_BUCKET_NAME: drive-media-storage
  AWS_S3_SIGNATURE_VERSION: s3v4
  AWS_S3_ENDPOINT_URL: https://rustfs.example.com
  MEDIA_BASE_URL: https://drive.example.com
  # OIDC
  OIDC_OP_JWKS_ENDPOINT: https://voidauth.example.com/oidc/jwks
  OIDC_OP_AUTHORIZATION_ENDPOINT: https://voidauth.example.com/oidc/auth
  OIDC_OP_TOKEN_ENDPOINT: https://voidauth.example.com/oidc/token
  OIDC_OP_USER_ENDPOINT: https://voidauth.example.com/oidc/me
  OIDC_OP_LOGOUT_ENDPOINT: https://voidauth.example.com/oidc/session/end
  OIDC_RP_CLIENT_ID: pcl1xbprEdcbi0xQ
  OIDC_RP_CLIENT_SECRET: youroidcsecret
  OIDC_RP_SIGN_ALGO: RS256
  OIDC_RP_SCOPES: "openid email"
  LOGIN_REDIRECT_URL: https://drive.example.com
  LOGIN_REDIRECT_URL_FAILURE: https://drive.example.com
  LOGOUT_REDIRECT_URL: https://drive.example.com
  OIDC_REDIRECT_ALLOWED_HOSTS: '["https://drive.example.com"]'
  # WOPI
  WOPI_CLIENTS: "collabora,onlyoffice"
  WOPI_COLLABORA_DISCOVERY_URL: https://collabora.example.com/hosting/discovery
  WOPI_ONLYOFFICE_DISCOVERY_URL: https://onlyoffice.example.com/hosting/discovery
  WOPI_SRC_BASE_URL: https://drive.example.com

x-postgres-env: &postgres-env
  # Postgresql db container configuration
  POSTGRES_DB: drive
  POSTGRES_USER: drive
  POSTGRES_PASSWORD: 023917148a35e7c9a62964caabe0334c # set the database password
  # App database configuration
  DB_HOST: postgresql
  DB_NAME: drive
  DB_USER: drive
  DB_PASSWORD: 023917148a35e7c9a62964caabe0334c # set the database password
  DB_PORT: 5432

services:
  postgresql:
    image: postgres:16
    restart: unless-stopped
    environment:
      <<: *postgres-env
    volumes:
      - ./data/databases/backend:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      interval: 1s
      timeout: 2s
      retries: 300
  redis:
    image: redis:5
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 5s
      retries: 5
      start_period: 10s
  backend:
    image: lasuite/drive-backend:main
    restart: unless-stopped
    user: "${DOCKER_USER:-1000}"
    depends_on:
      postgresql:
        condition: service_healthy
        restart: true
      redis:
        condition: service_healthy
    environment:
      <<: [*common-env, *postgres-env]
  frontend:
    image: lasuite/drive-frontend:main
    user: "${DOCKER_USER:-1000}"
    restart: unless-stopped
    depends_on:
      - backend
    environment:
      <<: [*common-env]
    ports:
      - 127.0.0.1:9199:8083
    volumes:
      - ./default.conf.template:/etc/nginx/templates/docs.conf.template
    entrypoint:
      - /docker-entrypoint.sh
    command: ["nginx", "-g", "daemon off;"]
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080"]
      interval: 15s
      timeout: 30s
      retries: 20
      start_period: 10s
  celery:
    image: lasuite/drive-backend:main
    user: ${DOCKER_USER:-1000}
    restart: unless-stopped
    environment:
      <<: [*common-env, *postgres-env]
    command: [ "celery", "-A", "drive.celery_app", "worker", "-l", "INFO" ]
  onlyoffice-docs:
    image: onlyoffice/documentserver:latest
    container_name: onlyoffice-docs
    restart: unless-stopped
    environment:
      - WOPI_ENABLED=true
      - JWT_ENABLED=true
      - JWT_SECRET=be1b32d03c4397d2cabffa3acef7450b # set the JWT secret
    ports:
      - "127.0.0.1:9299:80"
    volumes:
      - ./onlyoffice/logs:/var/log/onlyoffice
      - ./onlyoffice/data:/var/www/onlyoffice/Data
      - ./onlyoffice/lib:/var/lib/onlyoffice
      - ./onlyoffice/db:/var/lib/postgresql
  collabora:
    image: collabora/code:latest
    container_name: collabora-code
    restart: unless-stopped
    ports:
      - "127.0.0.1:9980:9980"
    environment:
      - server_name=collabora.example.com
      - username=admin
      - password=97bfec0e41e788611c636024fa5bd4bb # set the collabora admin password
      - extra_params=--o:ssl.enable=false --o:ssl.termination=true
Notes:
If you only want one office suite, e.g. onlyoffice, change these variables:
WOPI_CLIENTS: "onlyoffice"
# WOPI_COLLABORA_DISCOVERY_URL: https://collabora.example.com/hosting/discovery # comment out this variable
By default Drive's WOPI uses onlyoffice; if you don't need collabora code you can skip deploying it, and onlyoffice is much nicer to use than collabora code anyway.
Create the NGINX config file used by the frontend container:
nano default.conf.template
Write the following:
upstream docs_backend {
    server ${BACKEND_HOST}:8000 fail_timeout=0;
}

upstream docs_frontend {
    server ${FRONTEND_HOST}:3000 fail_timeout=0;
}

server {
    listen 8083;
    server_name localhost;
    charset utf-8;

    # increase max upload size
    client_max_body_size 5120m;
    server_tokens off;
    proxy_ssl_server_name on;

    location @proxy_to_docs_backend {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://docs_backend;
    }

    location @proxy_to_docs_frontend {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://docs_frontend;
    }

    location / {
        try_files $uri @proxy_to_docs_frontend;
    }

    location /api {
        try_files $uri @proxy_to_docs_backend;
    }

    location /admin {
        try_files $uri @proxy_to_docs_backend;
    }

    location /media/ {
        # Auth request configuration
        auth_request /media-auth;
        auth_request_set $authHeader $upstream_http_authorization;
        auth_request_set $authDate $upstream_http_x_amz_date;
        auth_request_set $authContentSha256 $upstream_http_x_amz_content_sha256;
        # Pass specific headers from the auth response
        proxy_set_header Authorization $authHeader;
        proxy_set_header X-Amz-Date $authDate;
        proxy_set_header X-Amz-Content-SHA256 $authContentSha256;
        proxy_pass https://rustfs.example.com/drive-media-storage/;
        proxy_set_header Host rustfs.example.com;
        proxy_ssl_name rustfs.example.com;
    }

    # Proxy auth for media-preview
    location /media/preview/ {
        # Auth request configuration
        auth_request /media-auth;
        auth_request_set $authHeader $upstream_http_authorization;
        auth_request_set $authDate $upstream_http_x_amz_date;
        auth_request_set $authContentSha256 $upstream_http_x_amz_content_sha256;
        # Pass specific headers from the auth response
        proxy_set_header Authorization $authHeader;
        proxy_set_header X-Amz-Date $authDate;
        proxy_set_header X-Amz-Content-SHA256 $authContentSha256;
        proxy_pass https://rustfs.example.com/drive-media-storage/;
        proxy_set_header Host rustfs.example.com;
        proxy_ssl_name rustfs.example.com;
    }

    location /media-auth {
        proxy_pass http://docs_backend/api/v1.0/items/media-auth/;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Original-URL $request_uri;
        # Prevent the body from being passed
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-Method $request_method;
    }
}
Remember to change the domains inside location /media/ and location /media/preview/ to your own S3 domain:
proxy_pass https://rustfs.example.com/drive-media-storage/;
proxy_set_header Host rustfs.example.com;
proxy_ssl_name rustfs.example.com;
Start all the containers:
docker compose up -d
Run the database migrations and create the Django admin account:
docker compose run --rm backend python manage.py migrate
docker compose run --rm backend python manage.py createsuperuser --email [email protected] --password youradminpassword
Run the following command to enable the WOPI clients:
docker compose run --rm backend python manage.py trigger_wopi_configuration
This looks like a bug: celery should run it automatically, but for some reason it doesn't take effect. Without this command, document editing won't work.
Configure the Ferron reverse proxy:
nano /etc/ferron.kdl
Write the following:
drive.example.com {
  proxy "http://127.0.0.1:9199/"
  proxy_request_header_replace "Host" "{header:Host}"
}
onlyoffice.example.com {
  proxy "http://127.0.0.1:9299/"
  proxy_request_header_replace "Host" "{header:Host}"
}
collabora.example.com {
  proxy "http://127.0.0.1:9980/"
  proxy_request_header_replace "Host" "{header:Host}"
  disable_url_sanitizer #true
}
Reload Ferron:
systemctl reload ferron
If the Ferron reverse proxy makes uploads slow, see this article for a fix.
All services are then reachable at:
https://drive.example.com
https://onlyoffice.example.com/admin
https://collabora.example.com/browser/dist/admin/admin.html
The result:
A summary of the problems I hit during deployment.
1. The official environment variable files are incomplete, which makes deployment painful; for some variables you even have to read the source to figure out what they do = =
2. The frontend container must be given an entrypoint script, otherwise the mounted custom NGINX config is not used.
3. The official NGINX config lacks a route for /media/preview/, so uploaded files could not be previewed.
4. WebSocket connections to collabora code failed behind Ferron. Ferron sanitizes URLs by default to prevent path traversal and similar vulnerabilities; collabora code's WebSocket URL contains https://, which Ferron rewrote to https:/, making the backend report Bad URL. The fix is to disable Ferron's URL sanitizer: disable_url_sanitizer #true
5. An S3 implementation without versioning makes documents fail to save, as mentioned above.
6. S3 CORS: Drive talks to S3 directly rather than through the backend, so CORS rules must be configured on the S3 side.
7. For now, python manage.py trigger_wopi_configuration must be run manually to integrate onlyoffice and the other office suites. It's a bug: see this issue
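For point 6, the CORS rule on the S3 side can look roughly like this (a sketch in AWS's CORS JSON shape; the bucket name and origin are this article's examples, and how you apply it depends on your S3 implementation, e.g. `aws s3api put-bucket-cors --bucket drive-media-storage --cors-configuration file://cors.json`):

```shell
# Write an example CORS policy allowing the Drive frontend origin.
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://drive.example.com"],
      "AllowedMethods": ["GET", "PUT", "POST", "HEAD"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["ETag"]
    }
  ]
}
EOF
```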
Another important reason is that RustFS has, after a long struggle, finally fixed that bug = =
RustFS and MinIO are very similar to deploy and use; if you've used MinIO, picking up RustFS is easy, and it's much simpler than Garage~
Prerequisites:
Install Docker:
apt -y update
apt -y install curl
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Create the compose file:
mkdir -p /opt/rustfs && cd /opt/rustfs && nano docker-compose.yml
Write the following:
services:
  rustfs:
    image: rustfs/rustfs:latest
    container_name: rustfs-server
    restart: unless-stopped
    security_opt:
      - "no-new-privileges:true"
    ports:
      - "127.0.0.1:9000:9000" # S3 API
      - "127.0.0.1:9001:9001" # Console
    environment:
      - RUSTFS_SERVER_DOMAINS=rustfs.example.com
      - RUSTFS_VOLUMES=/data/rustfs0
      - RUSTFS_ADDRESS=0.0.0.0:9000
      - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CORS_ALLOWED_ORIGINS=*
      - RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS=*
      - RUSTFS_ACCESS_KEY=imlala
      - RUSTFS_SECRET_KEY=setyoursecretkey
      - RUSTFS_REGION=auto
      - RUSTFS_OBS_LOGGER_LEVEL=info
      - RUSTFS_OBS_LOG_DIRECTORY=/app/logs
    volumes:
      - ./data:/data
      - ./logs:/app/logs
    healthcheck:
      test: ["CMD", "sh", "-c", "curl -f http://localhost:9000/health && curl -f http://localhost:9001/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
Start it:
docker compose up -d
Configure the Ferron reverse proxy:
nano /etc/ferron.kdl
Write the following:
rustfs.example.com {
  proxy "http://127.0.0.1:9000/"
  proxy_request_header_replace "Host" "{header:Host}"
}
rustfs-console.example.com {
  proxy "http://127.0.0.1:9001/"
  proxy_request_header_replace "Host" "{header:Host}"
}
Reload Ferron:
systemctl reload ferron
If the Ferron reverse proxy makes uploads slow, see this article for a fix.
Console URL: https://rustfs-console.example.com
Login account: the RUSTFS_ACCESS_KEY value you set
Login secret: the RUSTFS_SECRET_KEY value you set
A quick word on virtual-host-style buckets. We already set the RUSTFS_SERVER_DOMAINS environment variable to rustfs.example.com, so virtual-host-style bucket names look like this:
bucket0.rustfs.example.com
bucket1.rustfs.example.com
bucket2.rustfs.example.com
...
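In other words, the bucket name just becomes a subdomain of the value in RUSTFS_SERVER_DOMAINS. A trivial helper to build such object URLs (rustfs.example.com is this article's example domain):

```shell
# Compose a virtual-host-style object URL: <bucket>.<server-domain>/<key>
vhost_url() { echo "https://$1.rustfs.example.com/$2"; }
vhost_url bucket0 test.jpg   # prints: https://bucket0.rustfs.example.com/test.jpg
```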
A hands-on demo: create a bucket in the RustFS console, e.g. named photo-rustfs
Edit the Ferron config file:
nano /etc/ferron.kdl
Change the earlier configuration to:
rustfs.example.com,*.rustfs.example.com {
  proxy "http://127.0.0.1:9000/"
  proxy_request_header_replace "Host" "{header:Host}"
}
Also, since a wildcard certificate is needed, be sure to switch Ferron's certificate issuance to DNS-01. My DNS provider here is Cloudflare; replace yourkey with the key you obtained from Cloudflare:
* {
  ...
  // auto_tls_challenge "http-01"
  auto_tls_challenge "dns-01" provider="cloudflare" api_key="yourkey"
  ...
}
Reload Ferron:
systemctl reload ferron
The bucket's URL should now be:
photo-rustfs.rustfs.example.com
Set the bucket's access policy to public, then upload a file to test; if everything works, the uploaded file should be reachable:
References:
https://docs.rustfs.com/integration/virtual.html
https://github.com/rustfs/rustfs/blob/main/docker-compose.yml
Features:
This project is also led by the French government; to quote:
On the 25th of January 2026, David Amiel, France’s Minister for Civil Service and State Reform, announced the full deployment of Visio—the French government’s dedicated Meet platform—to all public servants
This guy plans to have every French civil servant use it for meetings = = A program this impressive (well, the impressive part is LiveKit) simply must be deployed.
This post follows the official documentation, mainly recording and fixing some errors currently in the official deployment docs.
Prerequisites:
1. A domain with DNS records set up: meet.example.com (main app), livekit.example.com (livekit service)
2. A VoidAuth OIDC authentication deployment, example domain: voidauth.example.com
3. These server ports must not be occupied by other programs: 80, 443, 7880, 7881, 7882 (UDP), 8086
Create an OIDC app in VoidAuth:
Set the Redirect URLs to the following; mind the trailing /, don't drop it:
https://meet.example.com/api/v1.0/callback/
Set the PostLogout URL to the following; mind the trailing /, don't drop it:
https://meet.example.com/api/v1.0/logout-callback/
With the preparation done, we can deploy Meet. Install Docker:
apt -y update
apt -y install curl
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Download the compose file plus the env files, LiveKit config, and NGINX config it needs:
cd /opt
mkdir -p meet/env.d && cd meet
curl -o compose.yaml https://raw.githubusercontent.com/suitenumerique/meet/refs/heads/main/docs/examples/compose/compose.yaml
curl -o .env https://raw.githubusercontent.com/suitenumerique/meet/refs/heads/main/env.d/production.dist/hosts
curl -o env.d/common https://raw.githubusercontent.com/suitenumerique/meet/refs/heads/main/env.d/production.dist/common
curl -o env.d/postgresql https://raw.githubusercontent.com/suitenumerique/meet/refs/heads/main/env.d/production.dist/postgresql
curl -o livekit-server.yaml https://raw.githubusercontent.com/suitenumerique/meet/refs/heads/main/docs/examples/livekit/server.yaml
curl -o default.conf.template https://raw.githubusercontent.com/suitenumerique/meet/refs/heads/main/docker/files/production/default.conf.template
To keep the rest of the deployment smooth, first fix some errors currently in the official docs. Edit the compose file:
nano compose.yaml
The env file configured for the backend container is wrong: there is no backend file; the .env file is used instead. Comment out the nonexistent backend file and add .env:
...
  backend:
    image: lasuite/meet-backend:latest
    ...
    restart: always
    env_file:
      - env.d/common
      # - env.d/backend
      - .env
      - env.d/postgresql
...
The frontend container is also missing the .env file; add it, and expose the frontend container's port 8083 (not port 8086 as the official docs claim) for the reverse proxy later:
...
  frontend:
    image: lasuite/meet-frontend:latest
    ...
    env_file:
      - .env
      - env.d/common
    ...
    ports:
      - "127.0.0.1:8086:8083"
Expose the LiveKit container's port 7880; this is LiveKit's signaling port and is key to the communication between Meet and LiveKit:
...
  livekit:
    image: livekit/livekit-server:latest
    command: --config /config.yaml
    ports:
      - 127.0.0.1:7880:7880
      - 7881:7881/tcp
      - 7882:7882/udp
...
Edit the NGINX config used inside the frontend container:
nano default.conf.template
Change the two wrong environment variables, BACKEND_HOST/FRONTEND_HOST, to the following:
upstream meet_backend {
    server ${BACKEND_INTERNAL_HOST}:8000 fail_timeout=0;
}
upstream meet_frontend {
    server ${FRONTEND_INTERNAL_HOST}:8080 fail_timeout=0;
}
Edit the .env file:
nano .env
Only the values that need changing are listed here:
MEET_HOST=meet.example.com
#KEYCLOAK_HOST=id.domain.tld # comment out; we are not using Keycloak
LIVEKIT_HOST=livekit.example.com
#REALM_NAME=meet # comment out; we are not using Keycloak
Edit the common file:
nano env.d/common
Only the values that need changing are listed here:
# Django
DJANGO_SECRET_KEY= # generate with openssl rand -hex 32
# Mail
DJANGO_EMAIL_HOST=mail.example.com
DJANGO_EMAIL_HOST_USER=smtp
DJANGO_EMAIL_HOST_PASSWORD=smtppassword
DJANGO_EMAIL_PORT=587
[email protected]
DJANGO_EMAIL_USE_TLS=true
# OIDC
OIDC_OP_JWKS_ENDPOINT=https://voidauth.example.com/oidc/jwks
OIDC_OP_AUTHORIZATION_ENDPOINT=https://voidauth.example.com/oidc/auth
OIDC_OP_TOKEN_ENDPOINT=https://voidauth.example.com/oidc/token
OIDC_OP_USER_ENDPOINT=https://voidauth.example.com/oidc/me
OIDC_OP_LOGOUT_ENDPOINT=https://voidauth.example.com/oidc/session/end
OIDC_RP_CLIENT_ID=
OIDC_RP_CLIENT_SECRET=
# Livekit Token settings
LIVEKIT_API_SECRET= # generate with openssl rand -hex 32
Edit the postgresql file to set the PostgreSQL database password:
nano env.d/postgresql
Only the value that needs changing is listed here:
DB_PASSWORD=setyourdbpassword # set your database password
Edit livekit-server.yaml:
nano livekit-server.yaml
Set the value under keys to the same value as LIVEKIT_API_SECRET:
port: 7880
redis:
  address: redis:6379
keys:
  meet: # set this to the same value as LIVEKIT_API_SECRET
# WebRTC configuration
rtc:
  # # when set, LiveKit will attempt to use a UDP mux so all UDP traffic goes through
  # # listed port(s). To maximize system performance, we recommend using a range of ports
  # # greater or equal to the number of vCPUs on the machine.
  # # port_range_start & end must not be set for this config to take effect
  udp_port: 7882
  # when set, LiveKit enable WebRTC ICE over TCP when UDP isn't available
  # this port *cannot* be behind load balancer or TLS, and must be exposed on the node
  # WebRTC transports are encrypted and do not require additional encryption
  # only 80/443 on public IP are allowed if less than 1024
  tcp_port: 7881
  # use_external_ip should be set to true for most cloud environments where
  # the host has a public IP address, but is not exposed to the process.
  # LiveKit will attempt to use STUN to discover the true IP, and advertise
  # that IP with its clients
  use_external_ip: true
Start all the containers:
docker compose up -d
Run the database migrations and create the Django admin user:
docker compose run --rm backend python manage.py migrate
docker compose run --rm backend python manage.py createsuperuser --email [email protected] --password adminpassword
Later you can reach the Django admin at meet.example.com/admin, where you can manage the rooms users create.
Note that the user created here only has Django admin rights; the accounts actually used by end users must be created through OIDC.
Configure the Ferron reverse proxy:
nano /etc/ferron.kdl
Write the following:
meet.example.com {
  proxy "http://127.0.0.1:8086/"
  proxy_request_header_replace "Host" "{header:Host}"
}
livekit.example.com {
  proxy "http://127.0.0.1:7880/"
  proxy_request_header_replace "Host" "{header:Host}"
}
If you use a different reverse proxy, you must configure WebSocket and long-lived connection support for LiveKit. Ferron supports these by default, so nothing needs to go in its config file.
Reload Ferron:
systemctl reload ferron
The result:
Looking at this classic blue-and-white UI, the shape of these buttons, and the fact that it's French-made, it instantly feels like an old friend: OVH!
A quick test: enable the OBS virtual camera on a PC and join the room from a phone to see whether the two sides can hold a normal video call:
The feature set is quite complete: browser screen sharing, reactions, clapping, even virtual backgrounds and chat. The one caveat is connectivity when used from inside China; you know how the network there is. If you deploy per this article and can't connect, you may also need to configure a TURN server for LiveKit.