Discussion:
Add a remote disk to LVM
Mahmood Naderan
2014-05-07 05:12:41 UTC
Permalink
Hello,
Is it possible to add a network drive to an existing LVM volume group? I have created a group and added three local drives. Now I want to add a remote disk from another node. The remote node has an additional hard drive, which is mounted at /arch on that node.

Is that possible? How? All the examples I have found add extra local drives, not remote ones.

Here is some info:

# vgdisplay
  --- Volume group ---
  VG Name               tigerfiler1
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               2.73 TiB
  PE Size               4.00 MiB
  Total PE              715401
  Alloc PE / Size       715401 / 2.73 TiB
  Free  PE / Size       0 / 0
  VG UUID               8Ef8Vj-bDc7-H4ia-D3X4-cDpY-kE9Z-njc8lj


# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               tigerfiler1
  PV Size               931.51 GiB / not usable 1.71 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238467
  Free PE               0
  Allocated PE          238467
  PV UUID               FmC77z-9UaR-FhYa-ONHZ-EazF-5Hm2-8zmUuj

  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               tigerfiler1
  PV Size               931.51 GiB / not usable 1.71 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238467
  Free PE               0
  Allocated PE          238467
  PV UUID               1jBQUn-gkkD-37I3-R3nL-KeHA-Hn2A-4zgNcR

  --- Physical volume ---
  PV Name               /dev/sdd
  VG Name               tigerfiler1
  PV Size               931.51 GiB / not usable 1.71 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238467
  Free PE               0
  Allocated PE          238467
  PV UUID               mxi8jW-O868-iPse-IfY7-ag3m-R3vZ-gS3Jdx

Regards,
Mahmood
John Lauro
2014-05-07 05:28:55 UTC
Permalink
What type of remote disk? NFS?

A more common approach would be to move some directories to /arch and use symlinks.
You could also create a loopback disk file somewhere on /arch and add that to LVM (a rough sketch follows below). It's going to make bootup messy, so you wouldn't want any volumes on it that are required for bootup (especially / or /usr or /sbin, and probably not /var) ...
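To make the loopback idea concrete, here is a minimal, untested sketch. It assumes the remote /arch has been exported over NFS and mounted on N1 at /mnt/arch (a hypothetical mount point), that /dev/loop0 is free, and that the file size and names are only placeholders. Note that the loop device has to be re-attached before the volume group is activated on every boot, which is exactly what makes bootup messy.

# create a sparse 100 GiB backing file on the NFS-mounted /arch
dd if=/dev/zero of=/mnt/arch/lvm-backing.img bs=1M count=0 seek=102400

# attach it to a loop device and hand the loop device to LVM
losetup /dev/loop0 /mnt/arch/lvm-backing.img
pvcreate /dev/loop0
vgextend tigerfiler1 /dev/loop0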

Mahmood Naderan
2014-05-07 07:13:19 UTC
Permalink
No, it is not shared via NFS (do we have to share it first?).
The problem is that there are no free disk slots in our machine (say N1). However, another node (say N2, which runs Scientific Linux independently) has free slots, so I added the physical disk to N2. The disk has been formatted and has a mount point on N2.

Now I want to add N2:/dev/sdb to N1:/dev/tigerfiler1/tigervolume.
Can you please guide me step by step?

 
Regards,
Mahmood
David Sommerseth
2014-05-07 11:43:38 UTC
Permalink
Post by Mahmood Naderan
No it is not shared via NFS (do we have to first share it?)
Problem is, there is no free slot for disks in our machine (say N1).
However another node (say N2 which is running scientific linux
independently) has free slots. So I added the physical disks to N2. the
disk has been formatted and it has a mount point on N2.
Now I want to add N2:/dev/sdb to N1:/dev/tigerfiler1/tigervolume
Can you please guide step by step?
<DISCLAIMER>
Not tested, but this is the theory behind it. I take no responsibility
for potential data loss.
</DISCLAIMER>

I have no idea how N2:/dev/sdb differs from N1:/dev/sdb. You need
access to a device file for your new hard drive. I'm using /dev/sde
in the example here, to cover the LVM basics:

# Make the new drive a LVM physical volume
pvcreate /dev/sde

# Extend the tigerfiler1 volume group with the new drive
vgextend tigerfiler1 /dev/sde

Now you can grow the logical volume into the new free extents and resize
the filesystem on top of it:

lvextend -l +100%FREE /dev/tigerfiler1/tigervolume
fsadm resize /dev/tigerfiler1/tigervolume

That's the LVM theory.

What is confusing here is: "So I added the physical disks to N2. the
disk has been formatted and it has a mount point on N2." ... You cannot
use mount points as LVM physical volumes. If you need to do this over a
network, you need to configure iSCSI, which will give you another
/dev/sdX device when properly set up. This /dev/sdX device can then be
used as an LVM physical volume.

If considering iSCSI (you need tgtd on the iSCSI "server" (target) and
iscsi-initiator-utils on the "client" (initiator)), I would strongly
recommend using a separate network interface for the iSCSI traffic -
preferably back-to-back, to avoid poor network performance.
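If you go the back-to-back route, the interface setup is plain static addressing. A minimal sketch for the target side, assuming the spare NIC is eth1 and using EL5/6-style network-scripts (interface name and addresses are examples only):

# /etc/sysconfig/network-scripts/ifcfg-eth1 on the iSCSI target
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.2
NETMASK=255.255.255.0

The initiator side gets a matching file with its own address (e.g. 192.168.1.10), and the iSCSI traffic is then pointed at these dedicated addresses.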

--
kind regards,

David Sommerseth
Lamar Owen
2014-05-07 14:07:49 UTC
Permalink
Is it possible to add a network drive to existing LVM? I have created
a group and have added three local drives. Now I want to add a remote
disk from another node. The remote node has an additional hard drive
and is mounted to /arch (remote node)
Is that possible? How? All examples I see are trying to add extra
local drives and not remote drives.
Hmmm, yes, it is possible, using iSCSI.

In order to do this correctly you would want the machine that has the
additional drives set up as an iSCSI target, and then use the iSCSI
initiator on the first machine to attach to the disks; you can then add
those iSCSI disks to the volume group. I would use dedicated Gigabit
Ethernet NICs and point-to-point connections rather than trying to use
the existing ethernet ports, too. Oh, and you wouldn't have it mounted
on the second machine as such.

No, I can't give you a step-by-step; you'll have to do a bit of research,
and you really, really need to read up on and understand what iSCSI
brings to the party. I have an IA-64 box (SGI Altix) running CentOS 5.9
(my own hand-rebuild, bootstrapped up from SL CERN 5.4 IA-64) using an
EMC Clariion array's LUNs over iSCSI, so I have a bit of experience with
the initiator portion of the equation, but none at all with the target
portion; I do know that it exists, though.
Mahmood Naderan
2014-05-07 15:04:12 UTC
Permalink
Post by Lamar Owen
Hmmm, yes, it is possible, using iSCSI.
OK I will try that. Thanks


 
Regards,
Mahmood
David Sommerseth
2014-05-08 10:45:48 UTC
Permalink
** Terminology **

- iSCSI target
The iSCSI server; it provides SCSI targets over the network. In this
guide, I'll use 192.168.1.2 as the server IP address.

- iSCSI initiator
The iSCSI client; it initiates a connection to a SCSI target over the
network. I'll use the term "iSCSI client" instead, which is not
strictly correct, but hopefully more understandable. In this guide
I'll use 192.168.1.10 as the client IP address.

- IQN ids
A unique ID which represents a host providing or using an iSCSI target.
It's built up from 4 "blocks" of information.

Example initiator IQN: iqn.2014-05.com.example.client:d87e063dd8d9
Example target IQN: iqn.2014-05.com.example.server:targetName

An IQN should start with 'iqn'. The next block is a date stamp,
consisting of year and month separated by a dash. Most places I've seen
IQNs, they refer to the month and year the target or client was set up.
Then comes the "reversed" hostname. And at the end, separated by a
colon, a unique ID. For clients, I've mostly seen random hex numbers.
For servers, I've found it easier to parse the information when using a
target name instead of another random hex number.


** iSCSI target setup **

Install scsi-target-utils. And prepare a /etc/tgtd/targets.conf similar
to this:
------------------------------------------
default-driver iscsi

<target iqn.2014-05.com.example.server:DiskBpart1>
backing-store /dev/sdb1
incominguser ExampleClient PlainTextPassword
initiator-address 192.168.1.10
</target>
------------------------------------------

The incominguser and initiator-address lines restrict access to this target.
This setup means that the iSCSI initiator (the client) must have the IP
address 192.168.1.10 and must identify itself with the username/password
as described in incominguser. Please note that the password is indeed
in plain text(!), so take that into consideration when setting up targets.

When this is done, you can start the tgtd service:

[***@server: ~]# service tgtd start

To verify your setup, you can run this command:

[***@server: ~]# tgtadm -m target --op show

Now ensure that TCP port 3260 is open in your firewall, at least for
your iSCSI client (a sketch of a matching iptables rule follows below).
Also remember to check that tgtd is started on boot (chkconfig tgtd on).
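As a concrete example, on a stock iptables setup the rule could look like this (assuming the single client IP 192.168.1.10 used in this guide; adapt it to your own firewall layout):

# allow the iSCSI initiator to reach the target port
iptables -I INPUT -p tcp -s 192.168.1.10 --dport 3260 -j ACCEPT
# persist the rule across reboots (EL5/6)
service iptables save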


** iSCSI client/initiator setup **

Install the iscsi-initiator-utils package and set a unique initiator
name in /etc/iscsi/initiatorname.iscsi.
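The file holds a single InitiatorName line; for example, reusing the example client IQN from the terminology section above:

# /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2014-05.com.example.client:d87e063dd8d9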

Ensure that no firewall blocks your iSCSI client from accessing your
iSCSI target server on TCP port 3260. I mention it again because I
often forget this little detail myself.

First, check that you can see some available targets:

[***@client: ~]# iscsiadm -m discovery --op show -t sendtargets \
-p 192.168.1.2
192.168.1.2:3260,1 iqn.2014-05.com.example.server:DiskBpart1

This will update a local database with available targets on this iSCSI
server. Please notice that the next steps will use 'discoverydb' and
not 'discovery'. This is easy to overlook and can cause some confusion.

We will now set the username and password used by the iSCSI client to
connect to the iSCSI target.

[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 \
--op update --name node.session.auth.username \
--value ExampleClient
[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 \
--op update --name node.session.auth.password \
--value PlainTextPassword

Let's query this setup:

[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 \
--op show

# BEGIN RECORD 2.0-872
node.name = iqn.2014-05.com.example.server:DiskBpart1
node.tpgt = 1
node.startup = manual
iface.hwaddress = <empty>
iface.ipaddress = <empty>
iface.iscsi_ifacename = default
iface.net_ifacename = <empty>
iface.transport_name = tcp
iface.initiatorname = <empty>
node.discovery_address = 192.168.1.2
node.discovery_port = 3260
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 8
node.session.xmit_thread_priority = -20
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.auth.authmethod = CHAP
node.session.auth.username = ExampleClient
node.session.auth.password = ********
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.conn[0].address = 192.168.1.2
node.conn[0].port = 3260
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD

Double check node.name, node.discovery_address,
node.session.auth.username and node.conn[0].address.

Now we can log in and get access to this iSCSI target.

[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 \
--login
Logging in to [iface: default, target:
iqn.2014-05.com.example.server:DiskBpart, portal: 192.168.1.2,3260]
Login to [iface: default, target:
iqn.2014-05.com.example.server:DiskBpart, portal: 192.168.1.2,3260]
successful.

Check dmesg, and you'll see a new /dev/sd? device being connected. Now
you can start using this device however you like.
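To tie this back to the original question: once the new /dev/sd? device shows up on N1, the LVM steps from earlier in this thread apply to it directly. A sketch, assuming dmesg reported the new iSCSI disk as /dev/sde (check yours, it may differ) and using the volume group and logical volume names from the original post:

# make the iSCSI disk an LVM physical volume, then grow the VG, LV and filesystem
pvcreate /dev/sde
vgextend tigerfiler1 /dev/sde
lvextend -l +100%FREE /dev/tigerfiler1/tigervolume
fsadm resize /dev/tigerfiler1/tigervolume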

To log out from an iSCSI session, you need to make sure that nobody is
using this iSCSI target (filesystems unmounted, vgchange -an VolumeGroup
if you use LVM). Then you can do this:

[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 \
--logout
Logging out of session [sid: 13, target:
iqn.2014-05.com.example.server:DiskBpart, portal: 192.168.1.2,3260]
Logout of [sid: 13, target: iqn.2014-05.com.example.server:DiskBpart,
portal: 192.168.1.2,3260] successful.

To see which iSCSI connections you have active, you can do this:

[***@client: ~]# iscsiadm -m session
tcp: [16] 192.168.1.2:3260,1 iqn.2014-05.com.example.server:DiskBpart

If node.startup is set to automatic (iscsiadm -m node --op show),
Scientific Linux should automatically try to connect to the configured
iSCSI server during boot. It will start the needed iscsi and iscsid
services by itself. If you don't want this, you can set node.startup to
manual.
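If you do want the session restored automatically at boot, the setting can be flipped with the same --op update mechanism used above (target name as in this guide):

iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 \
--op update --name node.startup --value automatic

Keep in mind that if LVM volumes live on the iSCSI disk, the volume group can only be activated after the iSCSI login has happened.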

On the iSCSI target server, you can also see if anyone is connected by
checking the output of tgtadm -m target --op show.


To completely clean up an iSCSI setup on the client side, these commands
are useful:

[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart --op delete
[***@client: ~]# iscsiadm -m discoverydb -p 192.168.1.2 \
--type sendtargets --op delete

The first command will delete all the setup related to a specific target
node. And the second command will delete the target discovery
information related to the given iSCSI server.


You can do a lot more too, but I won't cover that in this quick setup
guide. The man pages and --help output for both iscsiadm and tgtadm are
fairly good too.

If somebody else finds errors, mistakes or better ways to do things,
please enlighten us! :)


--
kind regards,

David Sommerseth
Mahmood Naderan
2014-05-09 05:29:00 UTC
Permalink
That is a complete guide, David. Thanks.
I am following it.


 
Regards,
Mahmood



Lamar Owen
2014-05-10 16:23:31 UTC
Permalink
Post by David Sommerseth
** Terminology **
- iSCSI target
The iSCSI server, it provides SCSI targets over the network. In this
guide, I'll use 192.168.1.2 as the server IP address.
- iSCSI initiator
The iSCSI client, it initiates a connection to a SCSI
target over the network. I'll use the term "iSCSI client" instead,
which is not strictly correct, but hopefully more understandable. In
this guide I'll use 192.168.1.10 as the client IP address.
...

Very nice piece, David, and many thanks. While I had the initiator side
of that, I had no experience with the target side, and your piece covers
that well.
