Reference documents:

https://v3-1.docs.kubesphere.io/zh/docs/reference/storage-system-installation/glusterfs-server/

https://v3-1.docs.kubesphere.io/zh/docs/installing-on-linux/persistent-storage-configurations/install-glusterfs/

https://blog.csdn.net/qq_40907977/article/details/108776021

Main reference for the installation:

https://www.cnblogs.com/lingfenglian/p/11731849.html

Basic GlusterFS volume operations:

https://www.cnblogs.com/zhangb8042/p/7801205.html

GlusterFS architecture (distributed, replicated, distributed-replicated volumes, etc.):

https://blog.csdn.net/wh211212/article/details/79412081

Heketi installation:

https://www.cnblogs.com/netonline/p/10288219.html

https://cloud.tencent.com/developer/article/1602926

https://blog.csdn.net/qq_40757662/article/details/113200098

The GlusterFS and Heketi versions installed in the end:

[root@glusterfs-node1 ~]# yum list | grep glusterfs
glusterfs.x86_64                            9.5-1.el7                  @centos-gluster9
glusterfs-cli.x86_64                        9.5-1.el7                  @centos-gluster9
glusterfs-client-xlators.x86_64             9.5-1.el7                  @centos-gluster9
glusterfs-fuse.x86_64                       9.5-1.el7                  @centos-gluster9

[root@glusterfs-node1 ~]# gluster --version
glusterfs 9.5
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

[root@glusterfs-node1 ~]# heketi --version
Heketi 9.0.0

The following example uses three CentOS 7 servers:

Hostname         IP              Role
glusterfs-node1  192.168.128.11  GlusterFS, Heketi
glusterfs-node2  192.168.128.12  GlusterFS
glusterfs-node3  192.168.128.13  GlusterFS

Notes:

Heketi is a framework that provides a RESTful API for managing GlusterFS volumes, which makes it easier for administrators to operate GlusterFS.

Heketi can be installed on any server; for convenience, it is installed directly on glusterfs-node1.

Set the hostname and time zone (all three machines):

# Set the time zone
timedatectl set-timezone Asia/Shanghai

# Run on glusterfs-node1
hostnamectl set-hostname glusterfs-node1

# Run on glusterfs-node2
hostnamectl set-hostname glusterfs-node2

# Run on glusterfs-node3
hostnamectl set-hostname glusterfs-node3

Network settings (all three machines):

Configure a static IP (optional, but recommended so that the address does not depend on whatever the router's DHCP hands out):

vim /etc/sysconfig/network-scripts/ifcfg-ens33
# Remove:
BOOTPROTO=dhcp
# Add:
BOOTPROTO=static
# Set the values according to the actual IP, e.g. glusterfs-node1 uses 192.168.128.11

IPADDR=192.168.128.11
NETMASK=255.255.255.0
GATEWAY=192.168.128.2
DNS1=114.114.114.114
DNS2=8.8.8.8
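
After editing the file, restart the network service so the static IP takes effect (a minimal sketch; the interface name ens33 follows the example above, and on systems where NetworkManager manages the interface you may need to reactivate the connection instead):

# Apply the new static IP configuration (CentOS 7)
systemctl restart network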

Configure /etc/hosts:

cat >> /etc/hosts <<EOF
192.168.128.11 glusterfs-node1
192.168.128.12 glusterfs-node2
192.168.128.13 glusterfs-node3
EOF

Disable the firewall and SELinux (all three machines):

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld
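
The two commands above only take effect at the next boot. To apply them immediately as well (before the reboot performed later), you can additionally run:

# Stop the firewall now and switch SELinux to permissive mode for the current session
systemctl stop firewalld
setenforce 0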

Modify the SSH configuration (all three machines):

vim /etc/ssh/sshd_config
# Change the following settings:
PermitRootLogin yes
PasswordAuthentication yes
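
After editing sshd_config, restart sshd so the changes take effect:

systemctl restart sshd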

Configure passwordless SSH login (run on glusterfs-node1):

# Create a key on glusterfs-node1 by running the following command; press Enter at every prompt to accept the defaults

ssh-keygen

# Copy the key to all GlusterFS nodes

ssh-copy-id root@glusterfs-node1
ssh-copy-id root@glusterfs-node2
ssh-copy-id root@glusterfs-node3
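
You can verify that passwordless login works; each command should print the remote hostname without asking for a password:

ssh root@glusterfs-node2 hostname
ssh root@glusterfs-node3 hostname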

Reboot all three servers so that the settings above take effect.

If GlusterFS was installed before, remove the old configuration first (especially step 2 below).

For details, see: https://www.ibm.com/docs/zh/cloud-paks/cp-management/1.1.0?topic=SSFC4F_1.1.0/manage_cluster/uninstall_gluster.html

The steps are as follows:

  1. Delete the Helm chart.
helm delete --purge <release_name> --tls

Note: the helm delete command removes every object except heketi.backupDbSecret.

If you do not need the heketi.backupDbSecret object, you must delete it manually.

  2. Remove the Heketi and Gluster daemon configuration directories from every storage node.
rm -rf /var/lib/heketi
rm -rf /var/lib/glusterd
rm -rf /var/log/glusterfs
  3. Disable GlusterFS. In the management_services section of the <installation_directory>/cluster/config.yaml file, set storage-glusterfs: disabled.
management_services:
  istio: disabled
  vulnerability-advisor: disabled
  storage-glusterfs: disabled
  storage-minio: disabled

Install GlusterFS (all three machines):

# Install GlusterFS
yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

# Start the glusterd service
systemctl start glusterd.service
systemctl enable glusterd.service
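
Optionally confirm on each node that glusterd is running before building the cluster:

systemctl is-active glusterd.service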

Initialize the cluster (run on glusterfs-node1):

Run the following on glusterfs-node1 to add glusterfs-node2 and glusterfs-node3 to the cluster:

# Add glusterfs-node2 to the cluster
[root@glusterfs-node1 ~]# gluster peer probe glusterfs-node2
peer probe: success

# Add glusterfs-node3 to the cluster
[root@glusterfs-node1 ~]# gluster peer probe glusterfs-node3
peer probe: success

# Check the cluster status
[root@glusterfs-node1 ~]# gluster peer status
Number of Peers: 2

Hostname: glusterfs-node2
Uuid: f159de69-fe1d-4f28-a33a-59632e12351a
State: Peer in Cluster (Connected)

Hostname: glusterfs-node3
Uuid: 3fa6cb30-909d-40cc-8aa8-19d4028434f5
State: Peer in Cluster (Connected)

Create the data storage directory (all three machines):

mkdir -p /data/gluster-data

Initial gluster operations (run on glusterfs-node1)

Note: if you are using GlusterFS as persistent storage for Kubernetes, this step is not required as long as the glusterd service is running normally.

# Create a volume named gluster-data with three replicas, building the glusterfs filesystem on top of the existing Linux filesystem
[root@glusterfs-node1 ~]# gluster volume create gluster-data replica 3 transport tcp glusterfs-node1:/data/gluster-data glusterfs-node2:/data/gluster-data glusterfs-node3:/data/gluster-data force
volume create: gluster-data: success: please start the volume to access data
# The force flag is needed because gluster by default refuses to create bricks on the system disk.
# In production, bricks should indeed be kept off the system disk whenever possible;
# for testing, or when no other disk is available, add the force flag.

# List the volumes
[root@glusterfs-node1 ~]# gluster volume list
gluster-data

# Show the volume information
[root@glusterfs-node1 ~]# gluster volume info

Volume Name: gluster-data
Type: Replicate
Volume ID: 838b87a0-2c16-441d-9545-30b90b7a157f
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glusterfs-node1:/data/gluster-data
Brick2: glusterfs-node2:/data/gluster-data
Brick3: glusterfs-node3:/data/gluster-data
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

# Start the gluster-data volume
[root@glusterfs-node1 ~]# gluster volume start gluster-data
volume start: gluster-data: success

# Check the volume information again: Status has changed from Created to Started; only then can the volume be used normally.
[root@glusterfs-node1 ~]# gluster volume info

Volume Name: gluster-data
Type: Replicate
Volume ID: 838b87a0-2c16-441d-9545-30b90b7a157f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glusterfs-node1:/data/gluster-data
Brick2: glusterfs-node2:/data/gluster-data
Brick3: glusterfs-node3:/data/gluster-data
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
features.quota: off
features.inode-quota: off

# Optimization: enable the volume quota (optional)
[root@glusterfs-node1 ~]# gluster volume quota gluster-data enable
volume quota : success

# Optimization: set the volume quota (optional)
[root@glusterfs-node1 ~]# gluster volume quota gluster-data limit-usage / 10GB
volume quota : success

# Check the volume status
[root@glusterfs-node1 ~]# gluster volume status
Status of volume: gluster-data
Gluster process                                       TCP Port  RDMA Port  Online  Pid
----------------------------------------------------------------------------------------
Brick glusterfs-node1:/data/gluster-data              49152     0          Y       11762
Brick glusterfs-node2:/data/gluster-data              49152     0          Y       10573
Brick glusterfs-node3:/data/gluster-data              49152     0          Y       10531
Self-heal Daemon on localhost                         N/A       N/A        Y       11779
Self-heal Daemon on glusterfs-node2                   N/A       N/A        Y       10590
Self-heal Daemon on glusterfs-node3                   N/A       N/A        Y       10548

Task Status of Volume gluster-data
-----------------------------------------------------------------------------------------
There are no active volume tasks
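
If you enabled and set the quota above, you can verify it with the quota list subcommand (a small check; the output will reflect your own limits):

# List the configured quota limits and current usage for the volume
gluster volume quota gluster-data list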

GlusterFS client operations

# Install the dependencies
[root@localhost ~]# yum install -y glusterfs glusterfs-fuse

# Mount the cluster's gluster-data volume at the local /mnt directory (if the client is another machine, configure /etc/hosts first)
# 1. Temporary mount
[root@localhost ~]# mount -t glusterfs glusterfs-node1:gluster-data /mnt
# 2. Mount at boot

[root@localhost ~]# echo "glusterfs-node1:gluster-data /mnt glusterfs defaults 0 0" >> /etc/fstab

Once mounted locally, you can work with the gluster cluster just as with local files.
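
A quick way to confirm that replication works, using the paths from this example (a file written through the mount point should show up in the brick directory on every node):

# On the client: write a test file through the mount point
echo "hello gluster" > /mnt/test.txt

# On each of the three nodes: the file should be present in the brick directory
ls -l /data/gluster-data/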

Common GlusterFS client commands

# Add a node
gluster peer probe $HOSTNAME

# Remove a node
gluster peer detach $HOSTNAME

# Create a volume
gluster volume create

# Start a volume
gluster volume start $VOLUME_NAME

# Stop a volume
gluster volume stop $VOLUME_NAME

# Delete a volume
# Note: a volume must be stopped before it can be deleted
gluster volume delete $VOLUME_NAME

# Enable volume quota
gluster volume quota $VOLUME_NAME enable

# Disable volume quota
gluster volume quota $VOLUME_NAME disable

# Set a volume quota
gluster volume quota $VOLUME_NAME limit-usage / $SIZE
# Example: gluster volume quota gv0 limit-usage / 10GB

Install Heketi

GlusterFS itself does not provide an API. Heketi is installed so that the lifecycle of GlusterFS storage volumes can be managed through a RESTful API that Kubernetes can call.

This way, the Kubernetes cluster can provision GlusterFS storage volumes dynamically.

The binary packages can be downloaded from https://github.com/heketi/heketi/releases (pick heketi-{version}.linux.amd64.tar.gz); for that installation method, see the documents referenced at the top.

Here we install with rpm, which is more convenient.

Note: Heketi can be installed on any server (for convenience, it is installed directly on glusterfs-node1).

# Install the gluster repository (already installed on glusterfs-node1 earlier, so this can be skipped)
[root@glusterfs-node1 ~]# yum install -y centos-release-gluster

# Install Heketi
[root@glusterfs-node1 ~]# yum install -y heketi heketi-client

# Configure heketi
[root@glusterfs-node1 ~]# cp /etc/heketi/heketi.json /etc/heketi/heketi.json.backup
[root@glusterfs-node1 ~]# vim /etc/heketi/heketi.json
# The settings to change are shown below (the # lines are annotations only and must not appear in the actual JSON file):

{
  # Port; set it according to your environment
  "port": "8080",

  # Whether authentication is required (default false; when true, the heketi client must pass the user and secret below, e.g.: heketi-cli --user admin --secret 123456 cluster list)
  "use_auth": true,

  # When GlusterFS is used as the storage type of a KubeSphere cluster, the admin account and its secret value are required
  "jwt": {
    "admin": {
      "key": "123456"
    },
    "user": {
      "key": "123456"
    }
  },

  "glusterfs": {

    # Heketi accesses the cluster nodes over ssh
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    # Log level (debug, warning); log output goes to /var/log/messages
    "loglevel" : "warning"
  }
}

Set up passwordless access from Heketi to GlusterFS

Generate the key pair:

[root@glusterfs-node1 ~]# ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ""

Notes:

-t: key type;

-q: quiet mode;

-f: directory and file name of the generated key; this must match the "keyfile" value of the ssh executor in heketi.json;

-N: key passphrase; "" means empty.

The heketi service runs as the heketi user, which needs read permission on the newly generated key; otherwise the service will not start:

[root@glusterfs-node1 ~]# chown heketi:heketi /etc/heketi/heketi_key

Distribute the public key to all servers in the GlusterFS cluster (the -i option specifies the public key to use):

[root@glusterfs-node1 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@glusterfs-node1
[root@glusterfs-node1 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@glusterfs-node2
[root@glusterfs-node1 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@glusterfs-node3

Start Heketi

[root@glusterfs-node1 ~]# systemctl enable heketi
[root@glusterfs-node1 ~]# systemctl restart heketi
[root@glusterfs-node1 ~]# systemctl status heketi

If the service fails to start and reports the following error:

Error: unknown shorthand flag: 'c' in -config=/etc/heketi/heketi.json

the cause is that the default systemd unit file installed by yum contains one mistake.

Edit /usr/lib/systemd/system/heketi.service and change -config=/etc/heketi/heketi.json to --config=/etc/heketi/heketi.json (double dash), then run systemctl daemon-reload and restart heketi.
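
Once the service is up, a quick way to check that Heketi is reachable is its /hello endpoint (a sketch using the address from this example; a healthy instance answers with a short greeting):

curl http://192.168.128.11:8080/hello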

Build the GlusterFS cluster through a topology configuration

  1. Define the topology.json file
[root@glusterfs-node1 ~]# vim /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.128.11"
              ],
              "storage": [
                "192.168.128.11"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.128.12"
              ],
              "storage": [
                "192.168.128.12"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.128.13"
              ],
              "storage": [
                "192.168.128.13"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}

Notes:

The manage entry under node/hostnames is the management channel. Fill in the host IP; a hostname must not be used when the Heketi server cannot reach the GlusterFS nodes by hostname.

The storage entry under node/hostnames is the data channel. It also takes a host IP and may differ from manage.

The node/zone field specifies the failure domain the node belongs to. Heketi places replicas across failure domains to improve data availability; for example, different zone values can correspond to different racks, creating cross-rack failure domains.

The devices field lists the block devices of each GlusterFS node (multiple disks are allowed); they must be raw devices without a filesystem.
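
Before loading the topology, it can help to confirm that each device really is a bare block device (a sketch using /dev/sdb from this example; wipefs is destructive, so only run it on disks you intend to hand over to Heketi):

# The FSTYPE column should be empty if the device carries no filesystem
lsblk -f /dev/sdb

# If the disk was used before, wipe any existing signatures first
wipefs -a /dev/sdb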

  2. Build the cluster and create the storage
[root@glusterfs-node1 ~]# heketi-cli --server http://192.168.128.11:8080 --user admin --secret 123456 topology load --json=/etc/heketi/topology.json
Creating cluster ... ID: c46ab854ec4344b7c25c5d8aeac03beb
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node 192.168.128.11 ... ID: 17dadc920cddbc307e82ee06f7a31f9b
                Adding device /dev/sdb ... OK
        Creating node 192.168.128.12 ... ID: 64ddc31823702a198c0ffebc455c25ae
                Adding device /dev/sdb ... OK
        Creating node 192.168.128.13 ... ID: 1d0553f65ef6900eedfd0fe24c8a0919
                Adding device /dev/sdb ... OK
  3. View the cluster information

Note: because use_auth was set to true in heketi.json earlier, every heketi-cli operation must include the authentication information.

# List the clusters
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 cluster list
Clusters:
Id:c46ab854ec4344b7c25c5d8aeac03beb [file][block]

# Show the cluster details
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 cluster info c46ab854ec4344b7c25c5d8aeac03beb
Cluster id: c46ab854ec4344b7c25c5d8aeac03beb
Nodes:
17dadc920cddbc307e82ee06f7a31f9b
1d0553f65ef6900eedfd0fe24c8a0919
64ddc31823702a198c0ffebc455c25ae
Volumes:

Block: true

File: true

# List the nodes
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 node list
Id:17dadc920cddbc307e82ee06f7a31f9b     Cluster:c46ab854ec4344b7c25c5d8aeac03beb
Id:1d0553f65ef6900eedfd0fe24c8a0919     Cluster:c46ab854ec4344b7c25c5d8aeac03beb
Id:64ddc31823702a198c0ffebc455c25ae     Cluster:c46ab854ec4344b7c25c5d8aeac03beb

# Show the node details
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 node info 17dadc920cddbc307e82ee06f7a31f9b
Node Id: 17dadc920cddbc307e82ee06f7a31f9b
State: online
Cluster Id: c46ab854ec4344b7c25c5d8aeac03beb
Zone: 1
Management Hostname: 192.168.128.11
Storage Hostname: 192.168.128.11
Devices:
Id:847634fe87fbe9b8a432f4704f4b82a0   Name:/dev/sdb            State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19      Bricks:0

# Show the device details
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 device info 847634fe87fbe9b8a432f4704f4b82a0
Device Id: 847634fe87fbe9b8a432f4704f4b82a0
Name: /dev/sdb
State: online
Size (GiB): 19
Used (GiB): 0
Free (GiB): 19
Bricks:

# Show the details of the cluster created from the topology
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 topology info

Cluster Id: c46ab854ec4344b7c25c5d8aeac03beb

    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: 17dadc920cddbc307e82ee06f7a31f9b
        State: online
        Cluster Id: c46ab854ec4344b7c25c5d8aeac03beb
        Zone: 1
        Management Hostnames: 192.168.128.11
        Storage Hostnames: 192.168.128.11
        Devices:
                Id:847634fe87fbe9b8a432f4704f4b82a0   Name:/dev/sdb            State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19
                        Bricks:

        Node Id: 1d0553f65ef6900eedfd0fe24c8a0919
        State: online
        Cluster Id: c46ab854ec4344b7c25c5d8aeac03beb
        Zone: 1
        Management Hostnames: 192.168.128.13
        Storage Hostnames: 192.168.128.13
        Devices:
                Id:0360cae5417d2d367f033b72d6368eaa   Name:/dev/sdb            State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19
                        Bricks:

        Node Id: 64ddc31823702a198c0ffebc455c25ae
        State: online
        Cluster Id: c46ab854ec4344b7c25c5d8aeac03beb
        Zone: 1
        Management Hostnames: 192.168.128.12
        Storage Hostnames: 192.168.128.12
        Devices:
                Id:70b5c7e1f8d8a9d01a9935e1a82b5d70   Name:/dev/sdb            State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19
                        Bricks:

Test creating a volume

# Create a 2 GB volume with 3 replicas (run heketi-cli volume create -h for more options)
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 volume create --size=2 --replica=3
Name: vol_54bf2b378c2d0a176b74d16b791fba84
Size: 2
Volume Id: 54bf2b378c2d0a176b74d16b791fba84
Cluster Id: c46ab854ec4344b7c25c5d8aeac03beb
Mount: 192.168.128.11:vol_54bf2b378c2d0a176b74d16b791fba84
Mount Options: backup-volfile-servers=192.168.128.13,192.168.128.12
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3

# List the volumes
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 volume list
Id:54bf2b378c2d0a176b74d16b791fba84    Cluster:c46ab854ec4344b7c25c5d8aeac03beb    Name:vol_54bf2b378c2d0a176b74d16b791fba84

# Show the volume info
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 volume info 54bf2b378c2d0a176b74d16b791fba84
Name: vol_54bf2b378c2d0a176b74d16b791fba84
Size: 2
Volume Id: 54bf2b378c2d0a176b74d16b791fba84
Cluster Id: c46ab854ec4344b7c25c5d8aeac03beb
Mount: 192.168.128.11:vol_54bf2b378c2d0a176b74d16b791fba84
Mount Options: backup-volfile-servers=192.168.128.13,192.168.128.12
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3

# Mount the volume
[root@glusterfs-node1 ~]# mount -t glusterfs 192.168.128.11:vol_54bf2b378c2d0a176b74d16b791fba84 /mnt

# Delete the volume
[root@glusterfs-node1 ~]# heketi-cli --user admin --secret 123456 volume delete 54bf2b378c2d0a176b74d16b791fba84
Volume 54bf2b378c2d0a176b74d16b791fba84 deleted
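
With Heketi working, a Kubernetes cluster can provision GlusterFS volumes dynamically through a StorageClass that points at the Heketi API. A minimal sketch, assuming the address and admin secret configured above (the class name glusterfs is arbitrary; in production, keep the key in a Kubernetes Secret referenced via secretName/secretNamespace instead of restuserkey):

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.128.11:8080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "123456"
  volumetype: "replicate:3"
EOF

PersistentVolumeClaims that reference this StorageClass are then served by Heketi, which creates and starts the corresponding GlusterFS volumes automatically.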