That is a complete guide, David. Thanks.
I am following it.
Regards,
Mahmood
On Thursday, May 8, 2014 3:16 PM, David Sommerseth <sl+users-***@public.gmane.org> wrote:
** Terminology **
- iSCSI target
The iSCSI server; it provides SCSI targets over the network. In this
guide, I'll use 192.168.1.2 as the server IP address.
- iSCSI initiator
The iSCSI client; it initiates a connection to a SCSI
target over the network. I'll use the term "iSCSI client" instead,
which is not strictly correct, but hopefully more understandable. In
this guide I'll use 192.168.1.10 as the client IP address.
- IQN IDs
A unique ID which represents a host providing or using an iSCSI target.
It is built up of four "blocks" of information.
Example initiator IQN: iqn.2014-05.com.example.client:d87e063dd8d9
Example target IQN: iqn.2014-05.com.example.server:targetName
An IQN should start with 'iqn'. The next block is a date stamp,
consisting of year and month separated by a dash; in most places I've
seen IQNs, it refers to the month and year the target or client was
set up. Then comes a reversed domain name. And at the end, separated
by a colon, comes a unique ID. For clients, I've mostly seen some
random hex numbers. For servers, I've found it easier to parse the
information when using a descriptive target name instead of another
random hex number.
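To illustrate, the example target IQN above breaks down like this:
    iqn                 - fixed prefix
    2014-05             - date stamp (year-month)
    com.example.server  - reversed domain name
    targetName          - unique ID, after the colon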
** iSCSI target setup **
Install scsi-target-utils and prepare a /etc/tgt/targets.conf similar
to this:
------------------------------------------
default-driver iscsi
<target iqn.2014-05.com.example.server:DiskBpart1>
backing-store /dev/sdb1
incominguser ExampleClient PlainTextPassword
initiator-address 192.168.1.10
</target>
------------------------------------------
The incominguser and initiator-address settings restrict access to
this target. This setup means that the iSCSI initiator (the client)
must have the IP address 192.168.1.10 and must identify itself with
the username/password given in incominguser. Please note that the
password is indeed in plain text(!), so take that into consideration
when setting up targets.
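A small mitigation, assuming the config path above, is to make the
file readable by root only:
[***@server: ~]# chmod 600 /etc/tgt/targets.conf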
When this is done, you can start the tgtd service:
[***@server: ~]# service tgtd start
To verify your setup, you can run this command:
[***@server: ~]# tgtadm -m target --op show
Now ensure that TCP port 3260 is open in your firewall, at least for
your iSCSI client. Also remember to check that tgtd is started on boot
(chkconfig tgtd on).
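As a sketch, with the standard iptables firewall on EL6, a rule along
these lines should let only the client through (adjust this to your
own firewall setup):
[***@server: ~]# iptables -I INPUT -p tcp -s 192.168.1.10 \
        --dport 3260 -j ACCEPT
[***@server: ~]# service iptables save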
** iSCSI client/initiator setup **
Install the iscsi-initiator-utils package and set a unique initiator
name in /etc/iscsi/initiatorname.iscsi.
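The file holds a single line. A minimal example, reusing the example
initiator IQN from above (you can generate your own unique IQN with
the iscsi-iname tool from the same package):
------------------------------------------
InitiatorName=iqn.2014-05.com.example.client:d87e063dd8d9
------------------------------------------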
Ensure that no firewall blocks your iSCSI client from accessing your
iSCSI target server on TCP port 3260. I mention it again because I
often forget this little detail myself.
First, check that you can see some available targets:
[***@client: ~]# iscsiadm -m discovery --op show -t sendtargets \
-p 192.168.1.2
192.168.1.2:3260,1 iqn.2014-05.com.example.server:DiskBpart1
This will update a local database with the available targets on this
iSCSI server. Please notice that later steps will use 'discoverydb'
and not 'discovery'. This is easy to overlook and can cause some
confusion.
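For instance, to inspect the stored discovery record, the mode is
discoverydb:
[***@client: ~]# iscsiadm -m discoverydb -t sendtargets \
        -p 192.168.1.2 --op show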
We will now set the username and password used by the iSCSI client to
connect to the iSCSI target.
[***@client: ~]# iscsiadm -m node \
        -T iqn.2014-05.com.example.server:DiskBpart1 \
        --op update --name node.session.auth.username \
        --value ExampleClient
[***@client: ~]# iscsiadm -m node \
        -T iqn.2014-05.com.example.server:DiskBpart1 \
        --op update --name node.session.auth.password \
        --value PlainTextPassword
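On some installations the default auth method is not CHAP. If the
record shown below reports something other than CHAP, it can be set
with the same update pattern:
[***@client: ~]# iscsiadm -m node \
        -T iqn.2014-05.com.example.server:DiskBpart1 \
        --op update --name node.session.auth.authmethod \
        --value CHAP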
Let's query this setup:
[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 \
--op show
# BEGIN RECORD 2.0-872
node.name = iqn.2014-05.com.example.server:DiskBpart1
node.tpgt = 1
node.startup = manual
iface.hwaddress = <empty>
iface.ipaddress = <empty>
iface.iscsi_ifacename = default
iface.net_ifacename = <empty>
iface.transport_name = tcp
iface.initiatorname = <empty>
node.discovery_address = 192.168.1.2
node.discovery_port = 3260
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 8
node.session.xmit_thread_priority = -20
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.auth.authmethod = CHAP
node.session.auth.username = ExampleClient
node.session.auth.password = ********
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.conn[0].address = 192.168.1.2
node.conn[0].port = 3260
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD
Double check node.name, node.discovery_address,
node.session.auth.username and node.conn[0].address.
Now we can log in and get access to this iSCSI target.
[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 \
--login
Logging in to [iface: default, target:
iqn.2014-05.com.example.server:DiskBpart1, portal: 192.168.1.2,3260]
Login to [iface: default, target:
iqn.2014-05.com.example.server:DiskBpart1, portal: 192.168.1.2,3260]
successful.
Check dmesg, and you'll see a new /dev/sd? device being connected. Now
you can start using this device however you like.
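Since the question which started this thread was about extending an
LVM volume group, here is a minimal sketch, assuming the new disk
showed up as /dev/sdc and your existing volume group is named
VolumeGroup:
[***@client: ~]# pvcreate /dev/sdc
[***@client: ~]# vgextend VolumeGroup /dev/sdc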
To log out from an iSCSI session, you need to make sure that nobody is
using this iSCSI target (unmount it; vgchange -an VolumeGroup if you
use LVM). Then you can do this:
[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 \
--logout
Logging out of session [sid: 13, target:
iqn.2014-05.com.example.server:DiskBpart1, portal: 192.168.1.2,3260]
Logout of [sid: 13, target: iqn.2014-05.com.example.server:DiskBpart1,
portal: 192.168.1.2,3260] successful.
To see which iSCSI connections you have active, you can do this:
[***@client: ~]# iscsiadm -m session
tcp: [16] 192.168.1.2:3260,1 iqn.2014-05.com.example.server:DiskBpart1
If node.startup is set to automatic (iscsiadm -m node --op show),
Scientific Linux should automatically try to connect to the configured
iSCSI server during boot. It will start the needed iscsi and iscsid
services by itself. If you don't want this, you can set node.startup
to manual.
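Changing it follows the same update pattern as before, e.g.:
[***@client: ~]# iscsiadm -m node \
        -T iqn.2014-05.com.example.server:DiskBpart1 \
        --op update --name node.startup --value manual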
On the iSCSI target server, you can also see if anyone is connected by
checking the output of tgtadm -m target --op show.
To completely clean up an iSCSI setup on the client side, these commands
are useful:
[***@client: ~]# iscsiadm -m node \
-T iqn.2014-05.com.example.server:DiskBpart1 --op delete
[***@client: ~]# iscsiadm -m discoverydb -p 192.168.1.2 \
--type sendtargets --op delete
The first command deletes all the setup related to a specific target
node, and the second deletes the target discovery information related
to the given iSCSI server.
You can do a lot more too, but I won't cover that in this quick setup
guide. The man pages and --help output for both iscsiadm and tgtadm
are fairly good too.
If somebody else finds errors, mistakes or better ways to do things,
please enlighten us! :)
--
kind regards,
David Sommerseth
Post by Mahmood Naderan
Post by Lamar Owen
Hmmm, yes, it is possible, using iSCSI.
OK I will try that. Thanks
Regards,
Mahmood
Post by Lamar Owen
Is it possible to add a network drive to an existing LVM? I have
created a group and have added three local drives. Now I want to add
a remote disk from another node. The remote node has an additional
hard drive, which is mounted at /arch (on the remote node).
Is that possible? How? All examples I see are trying to add extra
local drives and not remote drives.
Hmmm, yes, it is possible, using iSCSI.
In order to do this correctly you would want the machine that has the
additional drives set up as an iSCSI target, and then use the iSCSI
initiator on the first machine to attach to the disks; you can then
add those iSCSI disks to the volume group. I would use dedicated
Gigabit Ethernet NICs and point-to-point connections rather than
trying to use the existing Ethernet ports, too. Oh, and you wouldn't
have it mounted on the second machine as such.
No, I can't give you a step-by-step; you'll have to do a bit of
research, and you really, really need to read up on and understand
what iSCSI brings to the party. I have an IA-64 box (SGI Altix)
running CentOS 5.9 (my own hand-rebuild, bootstrapped up from SL CERN
5.4 IA-64) using an EMC Clariion array's LUNs over iSCSI, so I have a
bit of experience with the initiator portion of the equation, but none
at all with the target portion; I do know that it exists, though.