
gluster-ant

A GlusterFS repository for learning and deploying a cluster.

What is GlusterFS?

  • Gluster is a free and open-source, scalable network filesystem.
  • It uses Disk --> Brick --> Volume to serve client data.
  • We can configure it in two ways:
    1- Replicated (HA)
    2- Distributed (scale)
  • In the replicated (HA) method, if you write 1 GB of data, then with a replica count of, for example, 3, you have 3 instances of your data, and of course the data occupies 3 GB on your servers. If one of your servers goes down, you can retrieve your data from the other nodes.
  • Now if you have one big file, for example 150 GB, and the disks on your Linux servers have only 100 GB per node, you cannot write that file into a replicated cluster and you should use DISTRIBUTED mode. In this mode, if you have 3 servers with 100 GB of storage each for GlusterFS, you have 300 GB in total for storing data, whereas in the replicated method you only have 100 GB. BUT if one of your servers goes down in distributed mode, you have lost the data stored on it.
  • Clients and the cluster can communicate over two protocols: one is NFS and the other is the native Gluster protocol, as sketched below.
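As a quick preview of the two mount styles (a minimal sketch, assuming a volume named vol1 served by host gl1, as created later in this guide):

native Gluster (FUSE) mount:

# mount -t glusterfs gl1:/vol1 /mnt/vol1

NFS mount of the same volume (Gluster's built-in NFS serves NFSv3 and must be enabled on the volume):

# mount -t nfs -o vers=3 gl1:/vol1 /mnt/vol1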

DIAGRAM :

(cluster architecture diagram)

Every server has two networks:
1- one for public access
2- one for disk replication

(screenshots: network configuration of servers gl1, gl2, and gl3)
Now we should use LVM to create logical volumes for sdb and sdc on all servers (the commands below show sdb; repeat them for sdc if you use both disks).

# pvs
# pvcreate /dev/sdb
# vgs
# vgcreate glustervg /dev/sdb
# lvs
# lvcreate -l 100%VG -n glusterlv glustervg
# lvs
# lsblk -l
# mkfs.ext4 /dev/glustervg/glusterlv
# mkdir /glustervolume
# vi /etc/fstab
/dev/glustervg/glusterlv /glustervolume ext4 defaults 0 0
# mount -a
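A quick optional check that the new filesystem is mounted:

# df -h /glustervolume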

Now install GlusterFS (on all nodes):

# apt update
# apt -y install glusterfs-server
# systemctl status glusterd
# systemctl enable glusterd
# systemctl start glusterd
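Optionally verify the installed version and that the daemon is active:

# gluster --version
# systemctl is-active glusterd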

Now we should create the storage pool. All nodes should see each other and be able to ping one another by name.

on all nodes :

# vi /etc/hosts
192.168.1.201 gl1
192.168.1.202 gl2
192.168.1.203 gl3

Just on the first node (node1):
# gluster peer probe gl2
# gluster peer probe gl3
# gluster peer status
# gluster pool list
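If you probed the wrong peer, it can be detached again (only while none of its bricks belong to a volume):

# gluster peer detach gl3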

Now we create the brick directory for the volume (on all nodes):

# mkdir /glustervolume/vol1

on one node :

# gluster volume create vol1 replica 3 gl1:/glustervolume/vol1 gl2:/glustervolume/vol1 gl3:/glustervolume/vol1
# gluster volume start vol1
# gluster volume status vol1
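To inspect the volume's configuration and brick list:

# gluster volume info vol1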

On the client node:

# apt install glusterfs-client -y

Update /etc/hosts on the clients:

# vi /etc/hosts
192.168.1.201 gl1
192.168.1.202 gl2
192.168.1.203 gl3

Create a mount directory on the client:

# mkdir /mnt/vol1
# echo "gl3:/vol1 /mnt/vol1 glusterfs defaults 0 0 " >> /etc/fstab
# mount -a
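To see replication in action, write a file through the client mount and check that it appears on every brick (never write directly into the brick directories on the servers; always go through a client mount):

On the client:

# touch /mnt/vol1/testfile

On each server:

# ls /glustervolume/vol1

The file should show up in all three bricks.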

Note: these three servers must be able to reach each other without firewalls blocking GlusterFS traffic.

How to delete a volume?

1- Unmount the volume from the clients:

# umount /mnt/vol1

2- Stop the volume:

# gluster volume stop vol1

Your files still exist on the bricks at this point. If you want to delete the files too, you should delete them from all 3 servers. To remove the volume definition itself, delete the volume as shown after these steps.

3- Comment out the mount entry in /etc/fstab on the clients.
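On one node:

# gluster volume delete vol1

On all servers, only if you also want to discard the old brick data (destructive):

# rm -rf /glustervolume/vol1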

How to create a distributed volume?

# gluster volume create vol2 gl1:/glustervolume/vol2 gl2:/glustervolume/vol2 gl3:/glustervolume/vol2
# gluster volume start vol2
# gluster volume info 
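In a distributed volume, each file is stored on exactly one brick, chosen by hashing the file name. A quick way to see this (assuming a client mount of vol2 at /mnt/vol2):

On the client:

# mkdir /mnt/vol2
# mount -t glusterfs gl1:/vol2 /mnt/vol2
# touch /mnt/vol2/file{1..6}

On each server:

# ls /glustervolume/vol2

Each brick holds only a subset of the files, but the client sees all of them.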

How to set quota on a volume?

# gluster volume quota vol2 enable
# gluster volume quota vol2 limit-usage / 1GB
# gluster volume quota vol2 list

On a client, inside the mount:

# df -hP .
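Whether df on the client reflects the quota depends on the quota-deem-statfs volume option; recent versions enable it together with quota, but on older ones you may need to set it yourself:

# gluster volume set vol2 quota-deem-statfs on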

How to set quota on a directory?

# gluster volume quota vol2 limit-usage /dir01 1GB 70

Here 1GB is the hard limit and 70 is the soft limit in percent. The directory /dir01 must already exist inside the volume (create it through a client mount).
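To inspect or remove the directory limit later:

# gluster volume quota vol2 list /dir01
# gluster volume quota vol2 remove /dir01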

How to extend a volume?

For illustration, start from a fresh single-brick volume:

# gluster volume create vol2 gl1:/glustervolume/vol2
# gluster volume start vol2
# gluster volume add-brick vol2 gl2:/glustervolume/vol2
# gluster volume rebalance vol2 fix-layout start
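fix-layout only updates the directory layout so that new files can land on the new brick. To migrate existing data as well, run a full rebalance and watch its progress:

# gluster volume rebalance vol2 start
# gluster volume rebalance vol2 status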
