Comment #6 on bug 1109302
Meanwhile I got the issue back on a machine similar to the one reported in
bug#1094979. The boot process just stalls shortly after firewalld starts, even
though this system has a software RAID spanning a local disk and an iSCSI disk.
I didn't change the firewalld setup on this machine, but I did apply a package
update.
The system stops with the known error:

[   69.339822]  connection1:0: ping timeout of 5 secs expired, recv timeout 5,
last rx 4294907086, last ping 4294908353, now 4294909632
[   69.339834]  connection1:0: detected conn error (1022)
[   69.339836]  connection2:0: ping timeout of 5 secs expired, recv timeout 5,
last rx 4294907086, last ping 4294908344, now 4294909632
[   69.339838]  connection2:0: detected conn error (1022)
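
For reference, the 5 second ping timeout in these messages matches the open-iscsi
NOP-out defaults; a sketch of the relevant settings in /etc/iscsi/iscsid.conf
(assuming the stock configuration):

  node.conn[0].timeo.noop_out_interval = 5   # send a NOP-out ping every 5 seconds
  node.conn[0].timeo.noop_out_timeout = 5    # declare the connection dead after 5 seconds without a reply

Raising these would only delay the symptom; the interesting part is why the
connection stops answering in the first place.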

These connection errors then cause multipath failures and some SCSI errors:

-->
[  104.553438] sd 6:0:0:0: timing out command, waited 30s
[  104.553441] sd 6:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_IMM_RETRY
driverbyte=DRIVER_OK
[  104.553443] sd 6:0:0:0: [sdb] tag#0 CDB: Test Unit Ready 00 00 00 00 00 00
[  104.561429] sd 7:0:0:0: timing out command, waited 30s
[  104.561430] sd 7:0:0:0: [sdd] tag#1 FAILED Result: hostbyte=DID_IMM_RETRY
driverbyte=DRIVER_OK
[  104.561432] sd 7:0:0:0: [sdd] tag#1 CDB: Test Unit Ready 00 00 00 00 00 00
[  105.570312] device-mapper: multipath: Failing path 8:16.
[  105.570373] device-mapper: multipath: Failing path 8:32.
[  105.570419] device-mapper: multipath: Failing path 8:48.
[  105.570464] device-mapper: multipath: Failing path 8:64.
[  134.565149] sd 6:0:0:0: timing out command, waited 30s
[  134.565152] sd 6:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_IMM_RETRY
driverbyte=DRIVER_OK
[  134.565154] sd 6:0:0:0: [sdb] tag#0 CDB: Test Unit Ready 00 00 00 00 00 00
[  134.573116] sd 7:0:0:0: timing out command, waited 30s
[  134.573119] sd 7:0:0:0: [sdd] tag#1 FAILED Result: hostbyte=DID_IMM_RETRY
driverbyte=DRIVER_OK
[  134.573120] sd 7:0:0:0: [sdd] tag#1 CDB: Test Unit Ready 00 00 00 00 00 00
-->

I'm not sure whether the SCSI subsystem behaves correctly here, but that's out
of scope for this bug.
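
(For completeness, the path and session state during the hang can be inspected
with the standard tools; nothing in this sketch is specific to this machine:

  multipath -ll              # path states of the multipathed LUNs
  iscsiadm -m session -P 3   # detailed state of the two iBFT sessions

but as said, the SCSI/multipath side is probably a separate topic.)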

(In reply to Markos Chandras from comment #5)
> 
> did you try 'firewall-cmd --zone=public --permanent
> --change-interface=ibft0' or something like this? Because this is a
> permanent change, it needs a reload (or system reboot)
> you can check the active zone with 'firewall-cmd --get-active'.

I did that and it shows:

kvm133:~ #  firewall-cmd --get-active
public
  interfaces: ibft0 ibft1

Still, the machine hangs at reboot.
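
For reference, the permanent assignment was done roughly like this (a sketch;
the public zone and the ibft0/ibft1 names are the ones shown above, and
--get-active-zones is the long form of the query):

  firewall-cmd --permanent --zone=public --change-interface=ibft0
  firewall-cmd --permanent --zone=public --change-interface=ibft1
  firewall-cmd --reload
  firewall-cmd --get-active-zones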

> But iscsi-target normally refers to the server. It has little value when you
> add this service to the client.

Ok

> What's the zone of the ibftX interface?

It's public.
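
This can also be queried per interface, which here returns "public" for both:

  firewall-cmd --get-zone-of-interface=ibft0
  firewall-cmd --get-zone-of-interface=ibft1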

> I am not sure about how iscsi works, but firewalld allows established and
> related connections. So I am guessing, that your host first talks to the
> remote server (so this connection is NEW) and when the server replies back
> (state changed to ESTABLISHED) and firewalld should allow further
> communication. It's similar to every other outbound connection no?

Even with the above setup, the boot hangs after firewalld starts.
When I disable firewalld at boot and start it manually after the machine is
up, there is no issue and the iSCSI connection works just fine.
I'm wondering whether this is related to wicked, which starts just before
firewalld. At that point, the iSCSI connection set up by the firmware is already
active. Maybe wicked triggers a reconnect that firewalld blocks?
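
One way to test that hypothesis would be to log what firewalld drops during boot
and, if the iSCSI traffic shows up there, to open the portal port explicitly on
the client side. A sketch, assuming the target uses the standard 3260/tcp port:

  firewall-cmd --set-log-denied=all                           # dropped packets then show up in the kernel log
  firewall-cmd --permanent --zone=public --add-port=3260/tcp
  firewall-cmd --reload

Opening the port should only matter for inbound connections, though; for the
outbound session the ESTABLISHED/RELATED handling you describe should already
cover the replies, so the logging step is probably the more telling one.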

