认真的ricky, https://liqimore.com ("Hey there~"), last build Wed, 06 Aug 2025 14:15:40 +0000

Setting up a GRE over IPsec tunnel on Debian 12 to securely interconnect two machines
https://liqimore.com/2025/gre-over-ipsec-on-debian12/ (Sun, 27 Jul 2025 13:25:00 +0000)

Problem and goal

Node A public IP: [usVDS ip]
Node A GRE IP: 192.168.200.1/30

Node B public IP: [usWest ip]
Node B GRE IP: 192.168.200.2/30

OS: Debian 12 on both nodes.

Goal: encrypted connectivity between Node A's private (GRE) IP and Node B's private IP.

I have a VDS on the US East Coast and a VPS on the West Coast. The East Coast VDS is powerful but has poor routing; the West Coast VPS has good routing but weak hardware. To run backend services on the East Coast VDS with the West Coast VPS as the egress, I decided to build a point-to-point GRE over IPsec tunnel.

Why not just use a proxy: performance is poor, and I don't want my traffic crossing the public internet unencrypted at the transport level (even if it is TLS-encrypted at the application layer).
Why not WireGuard: WireGuard uses UDP, and UDP is notoriously throttled (QoS'd) by many ISPs both in China and abroad. Converting it to TCP with udp2raw would work, but at that point it hardly seems worth it.

Configuring the GRE tunnel

Enable the GRE kernel module:

echo "ip_gre" >> /etc/modules
modprobe ip_gre

Enable IPv4 forwarding and disable ICMP redirects via sysctl:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.send_redirects = 0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.accept_redirects = 0" >> /etc/sysctl.conf
sysctl -p

Configure the GRE network interfaces (ifupdown stanzas in /etc/network/interfaces):

On the usVDS node:

auto gre_to_west
iface gre_to_west inet tunnel
address 192.168.200.1
netmask 30
mode gre
endpoint [usWest public IP]
ttl 64

On the usWest node:

auto gre_to_vds
iface gre_to_vds inet tunnel
address 192.168.200.2
netmask 30
mode gre
endpoint [usVDS public IP]
ttl 64
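Before committing the stanzas above to /etc/network/interfaces, the same tunnel can be tried out with iproute2. This is a dry-run sketch that only prints the equivalent commands for the usVDS side; the bracketed IPs are the article's placeholders, and dropping the echo (see the comment) executes them for real as root.

```shell
# Dry-run: print the iproute2 equivalent of the usVDS ifupdown stanza.
# [usVDS public IP] / [usWest public IP] are placeholders from the article.
run() { echo "$@"; }  # replace 'echo "$@"' with '"$@"' to actually run as root

run ip tunnel add gre_to_west mode gre local "[usVDS public IP]" remote "[usWest public IP]" ttl 64
run ip addr add 192.168.200.1/30 dev gre_to_west
run ip link set gre_to_west up
```

The usWest side mirrors this with the local/remote addresses swapped and 192.168.200.2/30.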

Bring up the GRE interfaces (ifup gre_to_west / ifup gre_to_vds).

Allow the GRE protocol through the firewall:

# Run on usVDS
ufw allow proto gre from [usWest IP] to [usVDS ip]

# Run on usWest
ufw allow proto gre from [usVDS ip] to [usWest IP]

Verify:

Capture the GRE tunnel's packets with tcpdump on usWest:

As the capture makes obvious, GRE is transmitted in plaintext and is highly unsafe on the public internet, so we run the GRE tunnel inside IPsec.

Configuring IPsec

Install and initialize the IPsec tooling:

apt install -y libreswan

Generate the required keys and record them.

Run on both usWest and usVDS:

ipsec showhostkey --list
ipsec showhostkey --left --rsaid [keyid]

The output looks like the screenshot below:

On usVDS, create the config file /etc/ipsec.d/ipsec.conf and fill in the following:

# /etc/ipsec.d/ipsec.conf - Node A (IP: usVDS) IPsec configuration

config setup
    protostack=netkey   # use the netkey protocol stack

conn %default
    ikelifetime=60m
    keylife=20m
    rekeymargin=3m
    keyingtries=1
    authby=secret    # pre-shared key (PSK) authentication

# GRE over IPsec connection
conn gre-ipsec
    left=[usVDS ip]     # local IP (Node A)
    leftsubnet=192.168.200.0/30   # only the GRE tunnel subnet goes through IPsec
    right=[usWest ip]  # remote IP (Node B)
    rightsubnet=192.168.200.0/30  # only the GRE tunnel subnet goes through IPsec
    ike=aes256-sha2_256    # IKE proposal
    esp=aes256-sha2_256    # ESP proposal
    auto=start             # start the connection automatically

On usWest, create the config file /etc/ipsec.d/ipsec.conf and fill in the following:

# /etc/ipsec.d/ipsec.conf - Node B (IP: usWest) IPsec configuration

config setup
    protostack=netkey   # use the netkey protocol stack

conn %default
    ikelifetime=60m
    keylife=20m
    rekeymargin=3m
    keyingtries=1
    authby=secret    # pre-shared key (PSK) authentication

# GRE over IPsec connection
conn gre-ipsec
    left=[usWest ip]   # local IP (Node B)
    leftsubnet=192.168.200.0/30   # only the GRE tunnel subnet goes through IPsec
    right=[usVDS ip]    # remote IP (Node A)
    rightsubnet=192.168.200.0/30  # only the GRE tunnel subnet goes through IPsec
    ike=aes256-sha2_256    # IKE proposal
    esp=aes256-sha2_256    # ESP proposal
    auto=start             # start the connection automatically

On usWest, create /etc/ipsec.secrets with the following content:

# /etc/ipsec.secrets - Node B (IP: usWest) pre-shared key (PSK)

[usWest ip] [usVDS ip] : PSK "your_secret_key"

On usVDS, create /etc/ipsec.secrets with the following content:

# /etc/ipsec.secrets - Node A (IP: usVDS) pre-shared key (PSK)

[usVDS ip] [usWest ip] : PSK "your_secret_key"
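A note on the key itself: your_secret_key should be replaced by the same long random string on both nodes. One way to generate one (a sketch using /dev/urandom; any strong random source works):

```shell
# Generate a 32-byte random pre-shared key, base64-encoded (44 characters).
psk=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
echo "$psk"
```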

Open the firewall; run on usVDS:

# Allow ESP from Node B (usWest) to Node A (usVDS)
ufw allow proto esp from [usWest ip] to [usVDS ip]

# Allow UDP port 500 (IKE negotiation) from Node B (usWest) to Node A (usVDS)
ufw allow proto udp from [usWest ip] to [usVDS ip] port 500

# Allow UDP port 4500 (NAT-T protected IPsec traffic) from Node B (usWest) to Node A (usVDS)
ufw allow proto udp from [usWest ip] to [usVDS ip] port 4500

Open the firewall; run on usWest:

# Allow ESP from Node A (usVDS) to Node B (usWest)
sudo ufw allow proto esp from [usVDS ip] to [usWest ip]

# Allow UDP port 500 (IKE negotiation) from Node A (usVDS) to Node B (usWest)
sudo ufw allow proto udp from [usVDS ip] to [usWest ip] port 500

# Allow UDP port 4500 (NAT-T protected IPsec traffic) from Node A (usVDS) to Node B (usWest)
sudo ufw allow proto udp from [usVDS ip] to [usWest ip] port 4500

Start the IPsec service on both nodes with ipsec start, then confirm it is healthy with systemctl status ipsec.service and ipsec verify.

Verification: as the screenshot below shows, the GRE tunnel is up:

Capturing on the remote end shows the traffic is no longer plaintext: the ICMP packets are encapsulated in ESP packets.

Likewise, monitoring the GRE interface again shows no plaintext packets at all.

An informal benchmark of Tencent Cloud's database offerings (Lightweight Database, TencentDB, the cloud-native database, and self-hosted MySQL)
https://liqimore.com/2025/qcloud-database-sysbench/ (Sat, 01 Feb 2025 13:23:00 +0000)

Because access to the instance types was time-limited, only sysbench baseline benchmarks were run; full performance and feature reviews may follow later.
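Every run below follows the same three sysbench phases (prepare, prewarm, run), varying only the host, credentials, and thread count. A dry-run sketch of that workflow, with HOST/USER/PASS as placeholders:

```shell
# Dry-run: print the sysbench phases used for every target below.
common="--mysql-host=HOST --mysql-port=3306 --mysql-user=USER --mysql-password=PASS --mysql-db=sbtest --tables=30 --table-size=100000"
for phase in prepare prewarm run; do
  echo "sysbench oltp_read_write $common --threads=16 $phase"
done
```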

Tencent Cloud Lightweight Database

1 core / 1 GB RAM, Shanghai instance

Test commands:

sysbench oltp_read_write --mysql-host=10.0.4.13 --mysql-port=3306 --mysql-user=root --mysql-password=Test@1234! --mysql-db=sbtest --tables=30 --table-size=100000 --threads=30 prepare

sysbench oltp_read_write --mysql-host=10.0.4.13 --mysql-port=3306 --mysql-user=root --mysql-password=Test@1234! --mysql-db=sbtest --tables=30 --table-size=100000 --threads=30 prewarm

sysbench oltp_read_write --mysql-host=10.0.4.13 --mysql-port=3306 --mysql-user=root --mysql-password=Test@1234! --mysql-db=sbtest --tables=30 --table-size=100000 --threads=16 --time=3600 --report-interval=10 run
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 16
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 10s ] thds: 16 tps: 704.79 qps: 14123.44 (r/w/o: 9888.99/2823.27/1411.18) lat (ms,95%): 86.00 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 16 tps: 701.40 qps: 14016.82 (r/w/o: 9811.12/2802.90/1402.80) lat (ms,95%): 86.00 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 16 tps: 759.50 qps: 15198.90 (r/w/o: 10640.10/3039.80/1519.00) lat (ms,95%): 82.96 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 16 tps: 791.50 qps: 15827.80 (r/w/o: 11079.40/3165.40/1583.00) lat (ms,95%): 77.19 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 16 tps: 782.60 qps: 15651.67 (r/w/o: 10955.38/3131.19/1565.10) lat (ms,95%): 80.03 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 16 tps: 847.00 qps: 16942.40 (r/w/o: 11860.10/3388.20/1694.10) lat (ms,95%): 74.46 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            642390
        write:                           183540
        other:                           91770
        total:                           917700
    transactions:                        45885  (763.94 per sec.)
    queries:                             917700 (15278.75 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.0622s
    total number of events:              45885

Latency (ms):
         min:                                    4.53
         avg:                                   20.93
         max:                                   98.30
         95th percentile:                       82.96
         sum:                               960562.18

Threads fairness:
    events (avg/stddev):           2867.8125/235.27
    execution time (avg/stddev):   60.0351/0.01

During the test, CPU usage reached 100% and memory peaked at 60%.

Self-hosted MySQL 8.4.3

MySQL 8.4.3 deployed as a container via 1Panel, configured with the following parameters:

The machine: 2 cores / 4 GB RAM, SSD cloud disk, Silicon Valley zone 2, CPU E5-26xx v4.

Running the same kind of test, the performance numbers were:

sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest --tables=30 --table-size=100000 --threads=8 --time=60 --report-interval=10 run

[ 10s ] thds: 8 tps: 139.47 qps: 2798.75 (r/w/o: 1960.75/558.27/279.74) lat (ms,95%): 90.78 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 8 tps: 130.20 qps: 2605.47 (r/w/o: 1823.85/521.21/260.41) lat (ms,95%): 108.68 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 8 tps: 134.00 qps: 2679.80 (r/w/o: 1876.40/535.40/268.00) lat (ms,95%): 97.55 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 8 tps: 141.70 qps: 2830.59 (r/w/o: 1980.40/566.80/283.40) lat (ms,95%): 92.42 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 8 tps: 130.50 qps: 2615.61 (r/w/o: 1831.71/522.90/261.00) lat (ms,95%): 106.75 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 8 tps: 141.00 qps: 2813.91 (r/w/o: 1969.11/562.90/281.90) lat (ms,95%): 94.10 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            114478
        write:                           32708
        other:                           16354
        total:                           163540
    transactions:                        8177   (136.05 per sec.)
    queries:                             163540 (2720.91 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.1031s
    total number of events:              8177

Latency (ms):
         min:                                   14.81
         avg:                                   58.75
         max:                                  217.94
         95th percentile:                       97.55
         sum:                               480394.56

Threads fairness:
    events (avg/stddev):           1022.1250/18.94
    execution time (avg/stddev):   60.0493/0.03

Self-hosted MySQL 8.2.0

4 cores / 8 GB RAM, Shanghai zone 2, MySQL 8.2.0, CPU Platinum 8255C @ 2.50GHz.

Test:

sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest --tables=30 --table-size=100000 --threads=32 --time=60 --report-interval=10 run

[ 10s ] thds: 32 tps: 509.10 qps: 10237.55 (r/w/o: 7171.23/2044.91/1021.41) lat (ms,95%): 102.97 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 32 tps: 520.41 qps: 10397.48 (r/w/o: 7278.60/2078.06/1040.83) lat (ms,95%): 101.13 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 32 tps: 509.90 qps: 10206.11 (r/w/o: 7142.04/2044.28/1019.79) lat (ms,95%): 104.84 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 32 tps: 517.00 qps: 10325.01 (r/w/o: 7230.91/2060.10/1034.00) lat (ms,95%): 99.33 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 32 tps: 539.80 qps: 10810.08 (r/w/o: 7566.66/2163.82/1079.61) lat (ms,95%): 99.33 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 32 tps: 527.20 qps: 10537.44 (r/w/o: 7376.66/2106.39/1054.39) lat (ms,95%): 99.33 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            437738
        write:                           125068
        other:                           62534
        total:                           625340
    transactions:                        31267  (520.25 per sec.)
    queries:                             625340 (10404.91 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.0989s
    total number of events:              31267

Latency (ms):
         min:                                   11.17
         avg:                                   61.45
         max:                                  203.92
         95th percentile:                      101.13
         sum:                              1921249.64

Threads fairness:
    events (avg/stddev):           977.0938/11.14
    execution time (avg/stddev):   60.0391/0.02

TencentDB (Tencent Cloud Database)

sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 16
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 10s ] thds: 16 tps: 699.35 qps: 13999.17 (r/w/o: 9801.35/2797.72/1400.11) lat (ms,95%): 41.85 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 16 tps: 725.44 qps: 14513.60 (r/w/o: 10160.29/2902.44/1450.87) lat (ms,95%): 38.25 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 16 tps: 736.40 qps: 14722.51 (r/w/o: 10304.91/2944.70/1472.90) lat (ms,95%): 36.89 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 16 tps: 740.00 qps: 14801.58 (r/w/o: 10360.79/2960.80/1480.00) lat (ms,95%): 36.24 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 16 tps: 693.50 qps: 13880.42 (r/w/o: 9717.92/2775.40/1387.10) lat (ms,95%): 41.85 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 16 tps: 691.80 qps: 13826.74 (r/w/o: 9677.16/2766.09/1383.49) lat (ms,95%): 42.61 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            600348
        write:                           171528
        other:                           85764
        total:                           857640
    transactions:                        42882  (714.42 per sec.)
    queries:                             857640 (14288.31 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.0225s
    total number of events:              42882

Latency (ms):
         min:                                   15.65
         avg:                                   22.39
         max:                                   77.03
         95th percentile:                       39.65
         sum:                               960077.36

Threads fairness:
    events (avg/stddev):           2680.1250/268.27
    execution time (avg/stddev):   60.0048/0.01

Tencent cloud-native database

sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 16
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 10s ] thds: 16 tps: 476.03 qps: 9538.71 (r/w/o: 6680.95/1904.10/953.65) lat (ms,95%): 89.16 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 16 tps: 470.50 qps: 9412.82 (r/w/o: 6588.91/1882.90/941.00) lat (ms,95%): 89.16 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 16 tps: 477.80 qps: 9551.82 (r/w/o: 6685.41/1910.80/955.60) lat (ms,95%): 89.16 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 16 tps: 458.60 qps: 9177.38 (r/w/o: 6424.69/1835.50/917.20) lat (ms,95%): 89.16 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 16 tps: 468.40 qps: 9367.08 (r/w/o: 6556.48/1873.80/936.80) lat (ms,95%): 89.16 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 16 tps: 464.40 qps: 9284.60 (r/w/o: 6498.40/1857.40/928.80) lat (ms,95%): 89.16 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            394436
        write:                           112696
        other:                           56348
        total:                           563480
    transactions:                        28174  (469.06 per sec.)
    queries:                             563480 (9381.29 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.0628s
    total number of events:              28174

Latency (ms):
         min:                                    5.04
         avg:                                   34.10
         max:                                  186.47
         95th percentile:                       89.16
         sum:                               960731.19

Threads fairness:
    events (avg/stddev):           1760.8750/66.38
    execution time (avg/stddev):   60.0457/0.00

Self-hosted MySQL on a physical machine

PVE virtualization, 8 cores / 32 GB RAM, CPU dedicated and pinned to a single NUMA node, hyper-threading enabled on the host:

8 x Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz
32G

sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 52
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 10s ] thds: 52 tps: 790.21 qps: 15886.18 (r/w/o: 11127.42/3173.14/1585.62) lat (ms,95%): 132.49 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 52 tps: 761.04 qps: 15211.43 (r/w/o: 10649.11/3040.35/1521.97) lat (ms,95%): 144.97 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 52 tps: 786.61 qps: 15725.06 (r/w/o: 11003.28/3148.95/1572.83) lat (ms,95%): 134.90 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 52 tps: 754.30 qps: 15098.30 (r/w/o: 10571.07/3018.42/1508.81) lat (ms,95%): 142.39 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 52 tps: 752.69 qps: 15064.86 (r/w/o: 10544.71/3014.57/1505.59) lat (ms,95%): 155.80 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 52 tps: 777.30 qps: 15537.96 (r/w/o: 10878.57/3104.69/1554.70) lat (ms,95%): 137.35 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            647850
        write:                           185100
        other:                           92550
        total:                           925500
    transactions:                        46275  (769.91 per sec.)
    queries:                             925500 (15398.28 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.1025s
    total number of events:              46275

Latency (ms):
         min:                                   19.52
         avg:                                   67.46
         max:                                  461.28
         95th percentile:                      142.39
         sum:                              3121912.91

Threads fairness:
    events (avg/stddev):           889.9038/9.82
    execution time (avg/stddev):   60.0368/0.02
Setting up a Tailscale DERP relay
https://liqimore.com/2024/tailscale-derp-config/ (Wed, 11 Dec 2024 13:21:00 +0000)

Install Go

Download the latest Go build for your machine from https://go.dev/dl/ (if the download fails, retry a few times; the domain will resolve to a faster IP):

wget https://go.dev/dl/go1.23.4.linux-amd64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go*.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >>  ~/.profile
# check the installation
go version

# set a Go module proxy
go env -w GOPROXY=https://goproxy.cn

Install DERP

go install tailscale.com/cmd/derper@main

Install supervisord and write the config /etc/supervisord/supervisor.d/derp.ini:

[program:derp]
command                 = /root/go/bin/derper -a :10008 -stun -stun-port 34780
directory               = /root
autorestart             = true
startsecs               = 3
stdout_logfile          = /var/log/derp.out.log
stderr_logfile          = /var/log/derp.err.log
stdout_logfile_maxbytes = 2MB
stderr_logfile_maxbytes = 2MB
user                    = root
priority                = 999
numprocs                = 1
process_name            = %(program_name)s_%(process_num)02d

After configuring a reverse proxy, binding a domain, and opening the firewall, modify the Tailscale ACL and the relay is ready to use.
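For that last step, the custom relay is announced to clients through a derpMap section in the tailnet's ACL file. A sketch, assuming the relay is reachable at the hypothetical hostname derp.example.com with the ports from the supervisord config above:

```json
"derpMap": {
  "Regions": {
    "900": {
      "RegionID":   900,
      "RegionCode": "custom",
      "Nodes": [{
        "Name":     "900a",
        "RegionID": 900,
        "HostName": "derp.example.com",
        "DERPPort": 10008,
        "STUNPort": 34780
      }]
    }
  }
}
```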

Notes on attempting to recover data from a QNAP NAS using QTier
https://liqimore.com/2024/qnap-qtier-data-recover-linux/ (Sat, 20 Apr 2024 13:19:00 +0000)

Preface

My QNAP used QTier tiered storage on top of RAID 1. The NAS could no longer boot due to a hardware fault and the unit itself was not on hand, so I tried mounting the disks under Linux and recovering with the Windows version of R-Linux. Brand-name NAS systems are all built on the Linux storage stack but customize it heavily, so ordinary users cannot access the data from another OS (ZFS being the exception: the zpool can simply be imported).

This post covers recovering an unencrypted QNAP QTier storage pool; a follow-up will discuss best storage practices for brand-name NAS devices.

The failed recovery attempt (building an understanding of the QNAP QTier layout)

Environment setup

Create a VM and pass through the SAS HBA; attached to it are the two HC550 drives that formed a RAID 1 under QNAP.

After the VM boots, log in over SSH and check that both drives show up.

Install mdadm and lvm2 on Ubuntu.

Inspect the existing layout

Scan for existing RAID arrays:

sudo mdadm --examine --scan

Check the existing RAID structure:

sudo cat /proc/mdstat

The highlighted entry is the RAID 1 assembled from QNAP's data partitions; the data we want lives there, namely the sdc3 and sdb3 partitions of the two drives.

Attempting the recovery

Start the RAID 1 member on its own with mdadm:

sudo mdadm -A -R /dev/md9 /dev/sdb3

Then mount it on a freshly created directory:

sudo mkdir /mnt/qnap_recover
sudo mount /dev/md9 /mnt/qnap_recover/

At this point, if QTier was never enabled on your QNAP, the volume mounts as ordinary ext4 and the data is recovered. Mine had QTier enabled, however, and QNAP's customized LVM module cannot be installed on other machines, so another approach is needed:

  1. Use QuTScloud, QNAP's cloud NAS system, to try re-importing the storage pool and recover the data
  2. Use professional data-recovery software; this time, R-Linux

Recovering with R-Linux

Running R-Linux under Linux would require X11 forwarding and similar setup, so this time I used Windows: the SAS HBA was passed through to a Windows VM instead, and the VM booted.

Inside the Windows VM, both disks are correctly detected.

Download and scan

After downloading, installing, and starting R-Linux, the QNAP user-data partition is visible on the disks, i.e. the DRBD device mentioned earlier (even with a DRBD distributed block storage service configured, it cannot be read directly, because QNAP customized the LVM module).

Scan the partition

Scan the entire drive and save the scan results to a file, so that a mid-scan failure doesn't lose progress.

Wait for the scan

For an 18 TB HC550 the scan takes around 20 hours; be patient.

Analyze the scan results

Part of the results:

The scan lost the original file names and directory structure, and even the "successfully recovered" files would not open (my initial suspicion is again the underlying DRBD layer). So the original QNAP LVM stack is still required to load and read the files correctly. A future attempt could import the existing pool with QuTScloud; given time constraints and the data's importance, I stopped here. Discussion and experiments are welcome.

Switching WSL2 Ubuntu 22.04 to Chinese mirrors: Aliyun, Tsinghua, USTC, and 163
https://liqimore.com/2023/wsl2-ubuntu-22-04-apt-source-list/ (Wed, 25 Oct 2023 02:31:17 +0000)

For sources for other distros, go straight to the Alibaba open-source mirror site; it is very fast:

https://developer.aliyun.com/mirror/

Ubuntu 22.04 Aliyun sources

deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
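To apply this, the usual approach is to back up /etc/apt/sources.list and rewrite the default mirror in place (e.g. with sudo sed -i, after a backup copy). The substitution itself can be sketched and checked on a sample line:

```shell
# Rewrite the default Ubuntu mirror to the Aliyun mirror on one sample line.
line="deb http://archive.ubuntu.com/ubuntu/ jammy main restricted universe multiverse"
out=$(echo "$line" | sed 's|http://archive.ubuntu.com|http://mirrors.aliyun.com|')
echo "$out"
```

Run apt update afterwards so the new sources take effect.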

Fixing EasyConnect DNS resolution failures when OpenWrt is the main router
https://liqimore.com/2023/rfc1918-conflict-openwrt-easyconnect/ (Sun, 11 Jun 2023 06:28:31 +0000)

Symptom:

With OpenWrt as the main router doing PPPoE, hosts on this LAN that connect to a corporate network via EasyConnect can ping internal IPs, but cannot reach internal sites by domain name, unless they add hosts entries themselves.

Fix:

In the OpenWrt UI, open Network -> DHCP and DNS, untick "Rebind protection (discard upstream RFC 1918 responses)", then save and apply. Internal hostnames then resolve normally.
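The same change can be made from the command line via UCI: the LuCI checkbox maps to dnsmasq's rebind_protection option. A dry-run sketch that only prints the commands (drop the echo, per the comment, to apply them on the router):

```shell
# Dry-run: print the UCI commands that disable DNS rebind protection on OpenWrt.
run() { echo "$@"; }  # replace 'echo "$@"' with '"$@"' on the router

run uci set dhcp.@dnsmasq[0].rebind_protection='0'
run uci commit dhcp
run /etc/init.d/dnsmasq restart
```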

What is RFC 1918?

https://datatracker.ietf.org/doc/html/rfc1918

With "discard upstream RFC 1918 responses" enabled, the router drops upstream DNS answers that resolve to private (RFC 1918) addresses, so domains that point at internal IPs cannot be resolved.

The DAOS storage model
https://liqimore.com/2022/daos-storage-engine-introduction/ (Sun, 04 Sep 2022 12:29:15 +0000)

Overview:

A pool is a pre-allocated quantity of storage distributed across multiple targets; the slice of capacity allocated on each individual target is called a pool shard. A pool's size is set at creation time, but it can be grown or shrunk by resizing each pool shard, or expanded by adding more targets to the pool (adding more storage nodes). The pool provides storage virtualization in DAOS and is the unit of both storage isolation and resource allocation.

A pool can host multiple transactional object stores called containers. Each container is a private object address space, isolated from the other containers stored in the same pool, and supports transactional operations. The container is the unit of snapshotting and data management in DAOS. DAOS objects live in containers and can be distributed across any of the pool's targets, which improves both performance and resilience. DAOS objects can be accessed through several APIs and can abstractly represent structured, semi-structured, and unstructured data.

System structure:

Pool:

Every pool has a unique UUID (the pool UUID) and maintains a persistent, versioned table called the pool map that records the pool's target membership. Membership is definitive and consistent, and every membership change is sequentially numbered. The pool map holds not only the list of active targets but also the storage topology, stored as a tree, which identifies targets that share the same hardware. For example, the first level of the tree might group targets sharing a motherboard, the second level motherboards sharing a rack, and the third level racks sharing a cage.

This framework effectively expresses hierarchical fault domains, which makes it possible to avoid placing redundant data on targets likely to fail together. New targets can be added to the pool map at any time and failed targets can be removed; because the pool map is fully versioned, every modification, in particular a target removal, is assigned a unique sequence number.

A pool shard is a reservation in persistent memory, optionally combined with a pre-allocated region on a specific target's NVMe drive. A pool shard has a fixed capacity; once it is full, all operations fail. Its current usage can be queried at any time, broken down by the space each data type occupies within the shard.

When a target fails and is excluded from the pool map, the pool's redundant data is automatically recovered online; this process is called rebuild. Rebuild progress is recorded in a special log in persistent memory to handle cascading failures. When new targets are added to the pool map, data automatically migrates to the new nodes so that it is spread evenly across all members; this process, called space rebalancing, likewise uses its own persistent log to survive interruptions or restarts. A pool is thus a set of targets spread across different storage nodes, over which data and metadata are distributed to achieve horizontal scalability, and replicated or erasure-coded to ensure durability and availability.

When creating a pool, a set of system properties must be specified, and users may also define their own attributes, which are persisted as well. A pool only accepts connections from authenticated and authorized applications (security enforcement is mandatory), and multiple security frameworks can be used for authentication. Once a connection succeeds, a connection context is returned to the application process.

A pool stores several kinds of metadata: the pool map, authentication and authorization information, user attributes, properties, and rebuild logs. This metadata requires a high degree of resiliency, so it is replicated across distinct high-level fault domains. Out of a potentially large number of storage nodes, only a small subset runs the pool metadata service; across those nodes, DAOS uses a consensus algorithm to guarantee consistency in the presence of faults and to avoid split-brain syndrome.

To access a pool, a user process connects to it and passes the pool's security checks; the connection can then be shared with the application's peer processes via local2global() and global2local(). This avoids the problems caused by every process of a distributed job fetching the metadata at once. When the process that opened the connection disconnects, the connection is revoked by the pool.

Container:

A container represents an object address space within a pool and is likewise identified by a UUID. The figure below shows the container types and how applications use them:

As with pools, properties can be set at container creation time to enable specific features, such as checksums.

To access a container, an application must first connect to the pool the container lives in and then open the container. If the application is authorized to access it, a container handle is returned. The handle grants every process of the authorized application the ability to access the container and its contents; likewise, the process that opened the container can share the obtained handle with its peer processes, and the capability is revoked when the container is closed.

Objects within a container can have different data schemas and redundancy types over targets. Defining an object's schema requires parameters such as dynamic or static striping, replication, or erasure code. An object class predefines a common set of schema attributes for a group of objects; each object class is assigned a unique identifier and is associated with a given schema at the pool level.

A new object class can be defined at any time with a configurable schema, which is then immutable after creation (it can be modified once all objects of that schema have been destroyed). The table below lists the object classes predefined by a DAOS pool:

As the diagram below shows, each object in a container is identified by a 128-bit object address. The high 32 bits are reserved for DAOS to encode internal metadata such as the object class. The remaining 96 bits are managed by the user and must be unique within the container; they can be used by upper layers of the stack to encode their metadata, and the DAOS API provides a scalable 64-bit object-id allocator. The stored object id is the full 128-bit address, is for single use only, and can be associated with only a single object schema.

object id structure

<---------------------------- 128 bits ---------------------------->
--------------------------------------------------------------------
| DAOS Internal Bits |             Unique User Bits                |
--------------------------------------------------------------------
<---- 32 bits ----><--------------------- 96 bits ----------------->
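To make the split concrete, here is a small bash sketch that carves a hypothetical 128-bit object id (written as 32 hex digits; the value is made up) into its two fields:

```shell
# Split a hypothetical 128-bit DAOS object id into its two fields.
oid="00000001ABCDEF0123456789ABCDEF01"   # 32 hex digits = 128 bits (made-up value)
daos_bits=${oid:0:8}    # high 32 bits: reserved by DAOS (e.g. object class)
user_bits=${oid:8}      # low 96 bits: managed by the user / upper layers
echo "DAOS-reserved bits: $daos_bits"
echo "user-managed bits:  $user_bits"
```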

The container is the basic unit of transactions and versioning. Every operation on an object in a container is timestamped by the DAOS library; this timestamp is called an epoch. The DAOS transaction API allows multiple operations to be combined into one atomic operation, with multi-version concurrency control provided by epoch ordering. Recorded versioned updates can be aggregated periodically, reclaiming the space used by overlapping writes and reducing metadata complexity. A snapshot is a permanent reference placed on a particular epoch, protected from being reclaimed by aggregation.

Container metadata (the list of snapshots, container open handles, object class, user attributes, properties, and others) is stored in persistent memory and maintained by a dedicated container metadata service that either uses the same replicated engine as the parent metadata pool service, or has its own engine. This is specified when the container is created.

Access works the same way as for a pool, as described above:

Like a pool, access to a container is controlled by the container handle. To acquire a valid handle, an application process must open the container and pass the security checks. This container handle may then be shared with other peer application processes via the container local2global() and global2local() operations.

Object:

To avoid scaling problems and overhead common to a traditional storage system, DAOS objects are intentionally simple. No default object metadata beyond the type and schema is provided. This means that the system does not maintain time, size, owner, permissions or even track openers. To achieve high availability and horizontal scalability, many object schemas (replication/erasure code, static/dynamic striping, and others) are provided. The schema framework is flexible and easily expandable to allow for new custom schema types in the future. The layout is generated algorithmically on object open from the object identifier and the pool map. End-to-end integrity is assured by protecting object data with checksums during network transfer and storage.

A DAOS object can be accessed through different APIs:

Multi-level key-array API is the native object interface with locality feature. The key is split into a distribution (dkey) and an attribute (akey) key. Both the dkey and akey can be of variable length and type (a string, an integer or even a complex data structure). All entries under the same dkey are guaranteed to be collocated on the same target. The value associated with akey can be either a single variable-length value that cannot be partially overwritten, or an array of fixed-length values. Both the akeys and dkeys support enumeration.

Key-value API provides a simple key and variable-length value interface. It supports the traditional put, get, remove and list operations.

Array API implements a one-dimensional array of fixed-size elements addressed by a 64-bit offset. A DAOS array supports arbitrary extent read, write and punch operations.

https://www.openfabrics.org/wp-content/uploads/2020-workshop-presentations/105.-DAOS_KCain_JLombardi_AOganezov_05Jun2020_Final.pdf

Reference: https://docs.daos.io/v2.0/overview/storage/#storage-model

How to completely uninstall/remove cygwin from windows10
https://liqimore.com/2021/remove-cygwin-from-win10/ (Tue, 17 Aug 2021 00:00:00 +0000)

Steps for removal
  1. open powershell with administrator privilege.
  2. run takeown /f PATH_TO_CYGWIN /r /d y, for me, PATH_TO_CYGWIN = C:\ENVs\MinGW64.
  3. run icacls PATH_TO_CYGWIN /t /grant everyone:F
  4. run del PATH_TO_CYGWIN -Recurse -Force (in PowerShell, del is an alias for Remove-Item; the flags remove the directory tree without prompting).
  5. check if the files are deleted and delete the residual files.
CMU 15-445/645 – Homework assignment #1 SQL
https://liqimore.com/2021/cmu15445-database-system-homework1/ (Thu, 27 May 2021 00:00:00 +0000)

0x00. Description

Execute SQL queries in SQLite against the IMDB database.

0x01. Env setup

I’m using an Ubuntu 20.04 VM for this homework, installed on VMware Workstation Pro 15. There are several steps to prepare:

  1. install SQLite, sudo apt-get install sqlite3 libsqlite3-dev

  2. Download dataset, wget https://15445.courses.cs.cmu.edu/fall2019/files/imdb-cmudb2019.db.gz

  3. verify the dataset, md5sum imdb-cmudb2019.db.gz

  4. unzip and open it, gunzip imdb-cmudb2019.db.gz and sqlite3 imdb-cmudb2019.db

0x02. Q1_SAMPLE

Sample, skip.

0x03. Q2_UNCOMMON_TYPE

List the longest title of each type along with the runtime minutes.

select type, primary_title, max(runtime_minutes) FROM titles GROUP BY type ORDER BY type ASC, primary_title ASC;

output:

movie|Logistics|51420
short|Kuriocity|461
tvEpisode|Téléthon 2012|1800
tvMiniSeries|Kôya no yôjinbô|1755
tvMovie|ArtQuench Presents Spirit Art|2112
tvSeries|The Sharing Circle|8400
tvShort|Paul McCartney Backstage at Super Bowl XXXIX|60
tvSpecial|Katy Perry Live: Witness World Wide|5760
video|Midnight Movie Madness: 50 Movie Mega Pack|5135
videoGame|Flushy Fish VR: Just Squidding Around|1500

This is almost correct; however, in the tvShort type there is a tie.

sqlite> select title_id,type, runtime_minutes FROM titles WHERE type = 'tvShort'  ORDER BY runtime_minutes DESC LIMIT 5;
tt2292857|tvShort|60
tt5613498|tvShort|60
tt10622020|tvShort|53
tt0353158|tvShort|49
tt5452108|tvShort|48

Now the problem is the tie: I need a way to handle it (show both tuples when tied). So I treat the output of this SQL as a temp table.

select type, primary_title, max(runtime_minutes) FROM titles GROUP BY type ORDER BY type ASC, primary_title ASC;

Then join it with the titles table on matching runtime_minutes and type, which means:

if the max has no tie, the join outputs that same single tuple;
if the max HAS a tie, the join outputs every tuple that shares the same attributes as the max (here, the same runtime and type), which is exactly the tie.

As for the answer:

select  titles.type, titles.primary_title, titles.runtime_minutes from titles 
join (select type, primary_title, max(runtime_minutes) as maxLength FROM titles GROUP BY type) as maxType
on maxType.maxLength = titles.runtime_minutes and maxType.type = titles.type
order by titles.type ASC, titles.primary_title ASC;
movie|Logistics|51420
short|Kuriocity|461
tvEpisode|Téléthon 2012|1800
tvMiniSeries|Kôya no yôjinbô|1755
tvMovie|ArtQuench Presents Spirit Art|2112
tvSeries|The Sharing Circle|8400
tvShort|Paul McCartney Backstage at Super Bowl XXXIX|60
tvShort|The People Next Door|60
tvSpecial|Katy Perry Live: Witness World Wide|5760
video|Midnight Movie Madness: 50 Movie Mega Pack|5135
videoGame|Flushy Fish VR: Just Squidding Around|1500
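The tie-preserving join can be checked in isolation. Below is a minimal sketch using Python's built-in sqlite3 module on a throwaway in-memory table (the rows are made up for illustration; only the schema column names mirror the real dataset):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE titles (type TEXT, primary_title TEXT, runtime_minutes INT)")
con.executemany("INSERT INTO titles VALUES (?, ?, ?)", [
    ("movie",   "A", 100),
    ("movie",   "B", 90),
    ("tvShort", "C", 60),
    ("tvShort", "D", 60),   # tie on the max runtime within tvShort
])

# Join each row against the per-type maximum; tied rows both survive.
rows = con.execute("""
    SELECT titles.type, titles.primary_title, titles.runtime_minutes
    FROM titles
    JOIN (SELECT type, MAX(runtime_minutes) AS maxLength
          FROM titles GROUP BY type) AS maxType
      ON maxType.maxLength = titles.runtime_minutes
     AND maxType.type = titles.type
    ORDER BY titles.type ASC, titles.primary_title ASC
""").fetchall()
print(rows)  # [('movie', 'A', 100), ('tvShort', 'C', 60), ('tvShort', 'D', 60)]
```

Both tvShort rows come back, exactly like the two tvShort lines in the real output above.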

0x04. Q3_TV_VS_MOVIE

List all types of titles along with the number of associated titles.

SELECT type, count(distinct title_id) as number from titles GROUP BY type ORDER BY number;
tvShort|4075
videoGame|9044
tvSpecial|9107
tvMiniSeries|10291
tvMovie|45431
tvSeries|63631
video|90069
movie|197957
short|262038
tvEpisode|1603076

Answer from prof:

SELECT type, count(*) AS title_count FROM titles GROUP BY type ORDER BY title_count ASC;

At this point I realized that title_id is the primary key and therefore unique, so DISTINCT is redundant. It could be replaced by count(title_id) or simply count(*).
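This is easy to sanity-check on a throwaway table: when the counted column is a primary key, COUNT(*) and COUNT(DISTINCT pk) always agree (the rows below are illustrative, not real IMDB data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE titles (title_id TEXT PRIMARY KEY, type TEXT)")
con.executemany("INSERT INTO titles VALUES (?, ?)",
                [("tt1", "movie"), ("tt2", "movie"), ("tt3", "short")])

# Since title_id is the primary key, every row's title_id is distinct,
# so the two counts are identical for every group.
rows = con.execute("""
    SELECT type, COUNT(*), COUNT(DISTINCT title_id)
    FROM titles GROUP BY type ORDER BY type
""").fetchall()
print(rows)  # [('movie', 2, 2), ('short', 1, 1)]
```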

0x05. Q4_OLD_IS_NOT_GOLD

Which decades saw the most number of titles getting premiered? List the number of titles in every decade. Like 2010s|2789741.

From what I could do, I got this:

SELECT CAST(premiered/10 AS INT)*10 as upTime, count(*) FROM titles WHERE premiered is not NULL GROUP BY upTime ORDER BY upTime DESC;
2020|2492
2010|1050732
2000|494639
1990|211453
1980|119258
1970|99707
1960|75237
1950|39554
1940|10011
1930|11492
1920|13153
1910|26596
1900|9586
1890|2286
1880|22
1870|1

I had no idea how to append ‘s’ at the end of each decade, so I looked up the answer.

SELECT 
  CAST(premiered/10*10 AS TEXT) || 's' AS decade,
  COUNT(*) AS num_movies
  FROM titles
  WHERE premiered is not null
  GROUP BY decade
  ORDER BY num_movies DESC
  ;
2010s|1050732
2000s|494639
1990s|211453
1980s|119258
1970s|99707
1960s|75237
1950s|39554
1910s|26596
1920s|13153
1930s|11492
1940s|10011
1900s|9586
2020s|2492
1890s|2286
1880s|22
1870s|1

I had cast the decades to INT instead of TEXT, and INT cannot use || to append anything. I also needed to sort by the number of movies (I sorted by the wrong column before). Now I change my SQL to this:

SELECT CAST(premiered/10*10 AS TEXT) || 's' as upTime, count(*) as numbers FROM titles WHERE premiered is not NULL GROUP BY upTime ORDER BY numbers DESC;
2010s|1050732
2000s|494639
1990s|211453
1980s|119258
1970s|99707
1960s|75237
1950s|39554
1910s|26596
1920s|13153
1930s|11492
1940s|10011
1900s|9586
2020s|2492
1890s|2286
1880s|22
1870s|1

And now, the output is exactly the same.
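The decade trick can be sketched on its own: integer division floors the year to its decade, and SQLite's || operator wants TEXT, hence the CAST. A small sqlite3 demo on made-up years:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE titles (premiered INT)")
con.executemany("INSERT INTO titles VALUES (?)",
                [(2015,), (2019,), (1999,), (None,)])

# premiered/10*10 uses integer division (2015 -> 201 -> 2010),
# then CAST ... AS TEXT lets || append the trailing 's'.
rows = con.execute("""
    SELECT CAST(premiered/10*10 AS TEXT) || 's' AS decade, COUNT(*) AS n
    FROM titles WHERE premiered IS NOT NULL
    GROUP BY decade ORDER BY n DESC, decade
""").fetchall()
print(rows)  # [('2010s', 2), ('1990s', 1)] -- NULL years are filtered out
```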

0x06. Q5_PERCENTAGE

List the decades and the percentage of titles which premiered in the corresponding decade. Display like : 2010s|45.7042.

SELECT CAST(premiered/10*10 AS TEXT) || 's' as upTime, ROUND(CAST(count(*) AS FLOAT) / (SELECT COUNT(*) FROM titles) * 100, 4) as numbers 
FROM titles WHERE premiered is not NULL GROUP BY upTime ORDER BY numbers DESC;
2010s|45.7891
2000s|21.5555
1990s|9.2148
1980s|5.1971
1970s|4.3451
1960s|3.2787
1950s|1.7237
1910s|1.159
1920s|0.5732
1930s|0.5008
1940s|0.4363
1900s|0.4177
2020s|0.1086
1890s|0.0996
1880s|0.001
1870s|0.0

This builds on the previous question: just cast the count to FLOAT and divide. The reference answer casts it to REAL. Both are approximate (floating-point) data types, so in this question there is no difference.

If you need exact values, NUMERIC and DECIMAL are better choices.
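The CAST is the load-bearing part: SQLite's / on two integers is integer division, so without the cast the percentage truncates to 0. A toy table makes the difference visible (column names mirror the query above; the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE titles (premiered INT)")
con.executemany("INSERT INTO titles VALUES (?)",
                [(2015,), (2018,), (1995,), (2003,)])

# 2 of the 4 rows premiered in the 2010s: integer division gives 2/4 = 0,
# while casting one operand to REAL gives the intended 50.0 percent.
int_div, real_div = con.execute("""
    SELECT COUNT(*) / (SELECT COUNT(*) FROM titles),
           ROUND(CAST(COUNT(*) AS REAL) / (SELECT COUNT(*) FROM titles) * 100, 4)
    FROM titles WHERE premiered >= 2010
""").fetchone()
print(int_div, real_div)  # 0 50.0
```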

ECE9609 – Notes for Sudo Heap-based Buffer Overflow (CVE-2021-3156)
https://liqimore.com/2021/ece9609-hacking-companion-notes2/
Sat, 20 Mar 2021

Sudo Heap-based Buffer Overflow (CVE-2021-3156)

Background

Common Vulnerabilities and Exposures (CVE) is a dictionary of system vulnerabilities that have been disclosed to the public. Normally, an entry consists of a CVE-ID, a description, and a list of references: the CVE-ID identifies a particular CVE, the description explains its details, and the references list the reports from each organization that found it. In this presentation, we are going to explore the latest CVE, CVE-2021-3156. It reports that running "sudoedit -s" with a command-line argument ending in a single backslash character can mistakenly elevate the user's privileges to root.

Methodology

2.1 Vulnerability abstract

According to the report, if users who are not supposed to have administrative-level permissions find this vulnerability and successfully exploit it to elevate their privileges, they will be able to execute system commands that should only be executed by administrators or certain particular users. Taking it one step further, this can lead to information leakage and malicious tampering.

2.2 Vulnerability analysis

When using sudo to run commands in “shell” mode, there are two options: -i and -s. The sudo -s option sets sudo’s MODE_SHELL flag, whereas sudo -i sets both the MODE_SHELL and MODE_LOGIN_SHELL flags.

Here is the code segment from sudo.c’s main function; it invokes the parse_args function, which parses the command-line input into the appropriate data structures.

The next code segment determines whether MODE_SHELL was activated via -s or -i. If one of them is chosen, it rewrites argv, adding a backslash before the meta-characters.

Then, as the next code segment shows, it invokes the set_cmnd() function. set_cmnd first calculates the size of the input via strlen(), then invokes malloc() to allocate a buffer of that size, named user_args. Last, it checks whether MODE_SHELL is activated; if so, it concatenates the command-line input and stores it into user_args.

The next code segment is where the problem occurs: it is where set_cmnd() stores the command-line input into user_args. For example, suppose we input sudo -s '\' 112233. According to the code, from[0] will be the backslash character, and from[1] will be the argument’s null terminator. The null terminator is not a space character; it is not visible here, has all its bits set to zero, and marks the end of a C string. Since this fulfills the loop’s condition, the pointer “from” is incremented and now points at the null terminator. Then, as the new from[0], this null terminator is stored into the user_args buffer, and “from” is incremented again so that it points at the first character after the null terminator. This takes it out of the argument’s bounds, and it keeps going because it is inside a while loop. Put another way: the size of the user_args buffer was calculated at the beginning of set_cmnd and is fixed, but the loop keeps storing out-of-bounds characters into it, giving rise to a heap-based buffer overflow.

However, logically, if MODE_SHELL or MODE_LOGIN_SHELL is set, the command-line input cannot end with a single backslash: set_cmnd only concatenates the arguments when MODE_RUN, MODE_EDIT, or MODE_CHECK is activated, and if MODE_SHELL is activated, parse_args escapes the command-line input first, changing every single backslash into a double backslash. Therefore, instead of sudo, attackers can execute sudoedit. sudoedit triggers sudo but does not reset valid_flags and MODE_RUN, so attackers can skip the escaping part of parse_args, keep a single trailing backslash, and finally trigger the heap-based buffer overflow.
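The mechanics above can be illustrated with a toy model. The following Python sketch is NOT sudo's actual C code; it only simulates the vulnerable copy loop's counting behavior, laying argv strings out back-to-back with NUL terminators the way they sit in process memory, and comparing how many bytes the loop writes against the buffer size sudo would have allocated. All names here are our own.

```python
def simulate_set_cmnd(argv):
    """Return (allocated, written): the user_args size sudo computes
    versus the number of bytes the unescape loop actually writes."""
    # sudo sizes user_args as sum(strlen(arg) + 1) over the arguments
    allocated = sum(len(a) + 1 for a in argv)
    # lay the argv strings out contiguously, then some fake "heap" bytes
    mem = b"".join(a.encode() + b"\x00" for a in argv) + b"A" * 16
    written = 0
    i = 0
    for _ in argv:
        while i < len(mem) and mem[i] != 0:
            # the vulnerable check: a backslash NOT followed by a space
            # skips ahead one byte -- even onto the NUL terminator
            if mem[i] == ord("\\") and mem[i + 1:i + 2] != b" ":
                i += 1
            written += 1   # models *to++ = *from
            i += 1
        i += 1             # step over the NUL to the next argument
        written += 1       # models *to++ = ' '
    return allocated, written

# A lone trailing backslash makes the loop jump over the NUL and keep
# copying the bytes after it, writing far past the allocation:
print(simulate_set_cmnd(["\\"]))         # (2, 18): 18 bytes into a 2-byte buffer
print(simulate_set_cmnd(["\\\\", "x"]))  # (5, 4): escaped backslash stays in bounds
```

With the escaped double backslash (what parse_args would normally produce), the write count stays within the allocation; with the single backslash that sudoedit lets through, it overflows.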

Online Video

We’ll demo the CVE during the class; however, here are some great demos we found on the Internet. You can take them as references alongside ours.

First one:

Second one:

Reference

  1. https://www.youtube.com/watch?v=2_ZaNBl6qNo : a very detailed explanation of this CVE; we recommend watching it if you want to learn more

  2. https://github.com/blasty/CVE-2021-3156 : the exploit we’ll use in this video

  Other stuff you might find useful:

  3. https://www.qualys.com/2021/01/26/cve-2021-3156/baron-samedit-heap-based-overflow-sudo.txt

  4. https://datafarm-cybersecurity.medium.com/exploit-writeup-for-cve-2021-3156-sudo-baron-samedit-7a9a4282cb31
