IDE performance problems under SuSE 7.2
I recently switched/upgraded from Caldera OpenLinux 2.3 (kernel 2.2.10) to SuSE Professional 7.2. While there's much I like in the new setup, hard drive performance isn't on the list! Indeed, with the default SuSE pre-compiled 2.4.4 kernel, hard drive performance seems abysmal. (I have a Maxtor IDE 91741U4 drive - a 17 GB UDMA66 drive.) Subjectively, the system performs as if drive caching were disabled, with long delays for drive reads to complete, the drive light on frequently, and grinding noises coming from the drive. Since I still have OpenLinux 2.3 installed and bootable on the same drive, it's easy for me to switch back and forth between SuSE and Caldera, just to confirm that the drive performs much better under Caldera. (The 2.4.4 kernel is already set to use DMA, and hdparm does nothing to improve performance.)

I tried switching from the 2.4.4 kernel to the pre-compiled 2.2.19 kernel which comes with SuSE 7.2, and indeed under 2.2.19 drive performance "feels" much better. I also tried compiling my own 2.2.19 and 2.2.18 kernels, to see if that would further improve the drive performance. Here are the results for "hdparm -t" (rate for buffered disk reads of 64 MB):

  SuSE-compiled 2.2.19 kernel      ~ 9.5 MB/sec
  self-compiled 2.2.19 kernel      ~11.5 MB/sec
  self-compiled 2.2.18 kernel      ~11.5 MB/sec
  Caldera-compiled 2.2.10 kernel   ~15.5 MB/sec

For the record, this is an AMD K6-2 system running at 450 MHz on a TYAN Trinity 100AT (S1590S) motherboard. The chipset is a VIA MVP3. "free" reports 322 MB of main memory installed.

I'm confused on a number of counts:

- Why does the 2.4.4 performance feel so poor? Is this the infamous 2.4 kernel series virtual memory (VM) problem? A search on deja.com turned up several references to VIA chipset problems under 2.4 - maybe that's what is responsible?

- Why would the SuSE pre-compiled 2.2.19 kernel perform so much worse than a self-compiled kernel? (I couldn't find the 2.2.19 sources on the SuSE web site, so I downloaded the sources from kernel.org.)

- Finally, what do I need to do to get the IDE performance under 2.2.19 back to what it was under 2.2.10?

Thanks,
Dan Greenberg
Ann Arbor, MI USA
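For reference, the usual first check in a case like this is to see what the drive is actually set to and then re-run the timing test. A minimal sketch follows - the device name /dev/hda and the exact flag values are assumptions, not necessarily Dan's setup:

  hdparm -v /dev/hda             # show multcount, io_32bit, using_dma, readahead as the kernel sees them
  hdparm -d1 -c1 -m16 /dev/hda   # enable DMA, 32-bit I/O, and 16-sector multi-reads
  hdparm -tT /dev/hda            # then repeat the timing test

If -d1 is refused or makes no difference, the chipset driver in the running kernel is usually the place to look next.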
Here's my setup:

  hdparm -u1 -c3 -m16 /dev/hda

Check /proc/ide/hda/settings in both SuSE and Caldera. Compare them, post them here even, and then we can discuss what the problem is... whether it's distro related or settings related. (There's a small capture-and-diff sketch after the quoted message below.)

On 4 Sep 2001, Daniel Greenberg wrote:
I recently switched/upgraded from Caldera OpenLinux 2.3 (kernel 2.2.10) to Suse Professional 7.2.
While there's much I like in the new setup, hard drive performance isn't on the list! Indeed, with the default SuSE pre-compiled 2.4.4 kernel, hard drive performance seems abysmal. (I have a Maxtor IDE 91741U4 drive - a 17 GB UDMA66 drive.) Subjectively, the system performs as if drive caching were disabled, with long delays for drive reads to complete, the drive light on frequently, and grinding noises coming from the drive. Since I still have OpenLinux 2.3 installed and bootable on the same drive, it's easy for me to switch back and forth between SuSE and Caldera, just to confirm that the drive performs much better under Caldera. (The 2.4.4 kernel is already set to use DMA, and hdparm does nothing to improve performance.)
I tried switching from the 2.4.4 kernel to the pre-compiled 2.2.19 kernel which comes with Suse 7.2, and indeed under 2.2.19, drive performance "feels" much better. I also tried compiling my own 2.2.19 and 2.2.18 kernels, to see if that would further improve the drive performance. Here are the results - using "hdparm -t":
Results for "hdparm -t" (rate for buffered disk reads of 64 MB):
  SuSE-compiled 2.2.19 kernel      ~ 9.5 MB/sec
  self-compiled 2.2.19 kernel      ~11.5 MB/sec
  self-compiled 2.2.18 kernel      ~11.5 MB/sec
  Caldera-compiled 2.2.10 kernel   ~15.5 MB/sec
For the record, this is an AMD K6-2 system running at 450 MHz on a TYAN Trinity 100AT (S1590S) motherboard. The chipset is a VIA MVP3. "free" reports 322 MB of main memory installed.
I'm confused on a number of counts:
- Why does the 2.4.4 performance feel so poor? Is this the infamous 2.4 kernel series virtual memory (vm) problem? A search on deja.com turned up several references to via chipset problems under 2.4 - maybe that's what is responsible?
- Why would the Suse pre-compiled 2.2.19 kernel perform so much worse than a self-compiled kernel? (I couldn't find the 2.2.19 sources on the suse web site, so I downloaded the sources from kernel.org)
- Finally, what do I need to do to get the IDE performance under 2.2.19 back to what it was under 2.2.10?

--
noodlez: Karol Pietrzak
PGP KeyID: 0x3A1446A0
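A minimal way to capture and compare those settings across the two installs might look like this sketch (the mount point /data is an assumption - use any partition both distros can read, and substitute your actual drive for hdb):

  # under SuSE:
  cat /proc/ide/hdb/settings > /data/settings-suse.txt
  # reboot into Caldera, then:
  cat /proc/ide/hdb/settings > /data/settings-caldera.txt
  diff /data/settings-suse.txt /data/settings-caldera.txt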
For comparison:

bash-2.05# hdparm -tT /dev/hdd

/dev/hdd:
 Timing buffer-cache reads:   128 MB in  1.23 seconds = 104.07 MB/sec
 Timing buffered disk reads:   64 MB in  2.26 seconds =  28.32 MB/sec

bash-2.05# cat /proc/version
Linux version 2.4.4-4GB (root@Pentium.suse.de) (gcc version 2.95.3 20010315 (SuSE)) #1 Sat Jun 23 05:26:59 GMT 2001

This is a PIII-600 with 80 GB Maxtor 98196H8 IDE hard disks, running SuSE Linux 7.2 with the default SuSE 2.4 kernel, with:

  hdparm -d1 -u0 -c0 -m16 /dev/hdd

I found that changing values other than DMA made very little difference to speed, but turning off DMA made a significant difference:

bash-2.05# hdparm -d0 /dev/hdd

/dev/hdd:
 setting using_dma to 0 (off)
 using_dma    =  0 (off)

bash-2.05# hdparm -tT /dev/hdd

/dev/hdd:
 Timing buffer-cache reads:   128 MB in  1.22 seconds = 104.92 MB/sec
 Timing buffered disk reads:   64 MB in 19.84 seconds =   3.23 MB/sec

Be aware that the timings vary from run to run - you might want to take an average of 5 tests.

--
Simon Oliver
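Since single runs do bounce around, a quick way to average five hdparm -t runs is a loop like this sketch (it assumes the "buffered disk reads ... = N MB/sec" output format shown above; substitute your own device for /dev/hdd):

  for i in 1 2 3 4 5; do
      hdparm -t /dev/hdd
  done | awk -F= '/buffered disk reads/ { gsub(/MB\/sec/, "", $2); sum += $2; n++ }
                  END { if (n) printf "average: %.2f MB/sec\n", sum/n }'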
On Wed, Sep 05, 2001 at 01:43:21PM +0100, Simon Oliver wrote:
For comparison:

bash-2.05# hdparm -tT /dev/hdd

/dev/hdd:
 Timing buffer-cache reads:   128 MB in  1.23 seconds = 104.07 MB/sec
 Timing buffered disk reads:   64 MB in  2.26 seconds =  28.32 MB/sec

bash-2.05# cat /proc/version
Linux version 2.4.4-4GB (root@Pentium.suse.de) (gcc version 2.95.3 20010315 (SuSE)) #1 Sat Jun 23 05:26:59 GMT 2001

This is a PIII-600 with 80 GB Maxtor 98196H8 IDE hard disks, running SuSE Linux 7.2 with the default SuSE 2.4 kernel, with:

  hdparm -d1 -u0 -c0 -m16 /dev/hdd

I found that changing values other than DMA made very little difference to speed, but turning off DMA made a significant difference:

bash-2.05# hdparm -d0 /dev/hdd

/dev/hdd:
 setting using_dma to 0 (off)
 using_dma    =  0 (off)

bash-2.05# hdparm -tT /dev/hdd

/dev/hdd:
 Timing buffer-cache reads:   128 MB in  1.22 seconds = 104.92 MB/sec
 Timing buffered disk reads:   64 MB in 19.84 seconds =   3.23 MB/sec

Be aware that the timings vary from run to run - you might want to take an average of 5 tests.
I just realised that it might be a VIA chipset/Fujitsu HD problem
hdparm -tT /dev/hda
/dev/hda:
 Timing buffer-cache reads:   128 MB in  3.09 seconds = 41.42 MB/sec
 Timing buffered disk reads:   64 MB in  7.43 seconds =  8.61 MB/sec

hdparm -v /dev/hda

/dev/hda:
 multcount    = 16 (on)
 I/O support  =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 2491/255/63, sectors = 40032696, start = 0

hdparm -i /dev/hda

/dev/hda:
 Model=FUJITSU MPE3204AT, FwRev=ED-03-04, SerialNo=05000283
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=512kB, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=40032696
 IORDY=yes, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4
 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4
 Drive Supports: ATA-1 ATA-2 ATA-3 ATA-4

 Kernel Drive Geometry LogicalCHS=2491/255/63 PhysicalCHS=39714/16/63

The motherboard is a FIC PA-2013 (VIA Apollo MVP3) with an AMD K6-2/500 CPU. I run the stock SuSE 2.4.2-4GB kernel on a 7.1 system.

Is it really a bad mobo/HD combination, or the result of bad configuration?

-Kastus
-- Simon Oliver
On September 5, 2001 05:10 pm, Konstantin (Kastus) Shchuka wrote:
I just realised that it might be a VIA chipset/Fujitsu HD problem
The kernel has some options for different chipsets, including some from VIA. Selecting it on my system really helped. I don't know if it supports your VIA chipset. Best to check.

Nick
On Wed, 5 Sep 2001, Nick Zentena wrote:
On September 5, 2001 05:10 pm, Konstantin (Kastus) Shchuka wrote:
I just realised that it might be a VIA chipset/Fujitsu HD problem
The kernel has some options for different chipsets including some from Via. Selecting it on my system really helped. I don't know if it supports your Via chipset. Best to check.
Nick
Thanks Nick. I'm using a generic 2.2.19 kernel from kernel.org. The only option in "make xconfig" related to chipsets is "PCI quirks." I'll check kernel.org to see if there are PCI patches for VIA chips.

Dan Greenberg
Ann Arbor, MI
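If a suitable patch does turn up, applying it is the standard routine - roughly this sketch (the patch file name below is only a placeholder; use whatever file you actually download):

  cd /usr/src/linux-2.2.19
  patch -p1 --dry-run < /tmp/ide-2.2.19.patch   # first check that it applies cleanly
  patch -p1 < /tmp/ide-2.2.19.patch
  make xconfig                                  # any new options appear once the patch is in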
On September 5, 2001 08:58 pm, Daniel Greenberg wrote:
On Wed, 5 Sep 2001, Nick Zentena wrote:
On September 5, 2001 05:10 pm, Konstantin (Kastus) Shchuka wrote:
I just realised that it might be a VIA chipset/Fujitsu HD problem
The kernel has some options for different chipsets including some from Via. Selecting it on my system really helped. I don't know if it supports your Via chipset. Best to check.
Nick
Thanks Nick. I'm using a generic 2.2.19 kernel from kernel.org. The only option in make xconfig related to chipsets is "pci quirks." I'll check kernel.org to see if there are pci patches for via chips.
I don't have a copy of the 2.2.x kernel here right now, but under 2.4.x, under

  ATA/IDE/MFM/RLL support
    -> IDE, ATA and ATAPI Block devices
      -> IDE chipset support/bugfixes

you'll find:

  VIA82CXXX chipset support

Nick
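Enabling it and rebuilding follows the usual 2.4 routine - a sketch, assuming the kernel source lives in /usr/src/linux and that the machine boots with LILO (adjust paths and the image name to taste):

  cd /usr/src/linux
  make menuconfig        # enable VIA82CXXX chipset support under the menus above
  make dep
  make bzImage modules
  make modules_install
  cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.4-via   # example name only
  # add a matching entry to /etc/lilo.conf and re-run lilo before rebooting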
On Wed, 5 Sep 2001, Nick Zentena wrote:
On September 5, 2001 08:58 pm, Daniel Greenberg wrote:
On Wed, 5 Sep 2001, Nick Zentena wrote:
On September 5, 2001 05:10 pm, Konstantin (Kastus) Shchuka wrote:
I just realised that it might be a VIA chipset/Fujitsu HD problem
The kernel has some options for different chipsets including some from Via. Selecting it on my system really helped. I don't know if it supports your Via chipset. Best to check.
Nick
Thanks Nick. I'm using a generic 2.2.19 kernel from kernel.org. The only option in make xconfig related to chipsets is "pci quirks." I'll check kernel.org to see if there are pci patches for via chips.
I don't have a copy of the 2.2.x kernel here right now, but under 2.4.x, under
  ATA/IDE/MFM/RLL support
    -> IDE, ATA and ATAPI Block devices
      -> IDE chipset support/bugfixes
you'll find:
  VIA82CXXX chipset support
Nick
Thanks Nick. I downloaded the IDE patches for 2.2.19 from www.kernel.org/pub/linux/kernel/people/hedrick/ide-2.2.19. After applying the patch, "make xconfig" showed new options under "Block Devices." I selected the VIA82CXXX driver and recompiled and... drive performance *declined* by another 2 MB/sec or so as measured by hdparm -t. (So I saw a drop from around 11.5 MB/sec to around 9.5 MB/sec with the VIA driver. I ran the test 5 or 6 times and took the averages.)

This is really weird. I wonder if there are other options I need to select/deselect to get the VIA patch to work properly? For example, I kept the "PCI quirks" option selected - maybe that needs to be turned off if the VIA patch is being used?

I guess I need to do more experimenting, but I'm on my 9th or 10th kernel recompile of the last couple of days. (Not that I'm complaining.) I do find myself thinking, though, that at some point it might be easier to try another motherboard/chipset combination (with the same hard drive) and see what happens to drive performance. (Do all the AMD K6-2 compatible motherboards use VIA chipsets?)

Dan Greenberg
Ann Arbor, MI
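One quick sanity check is to see which of the relevant options actually ended up in each build - a sketch along these lines (the option names are taken from the 2.4/hedrick-patched trees and may differ slightly in a given 2.2.19 source):

  cd /usr/src/linux
  grep -E 'CONFIG_PCI_QUIRKS|CONFIG_BLK_DEV_VIA82CXXX|CONFIG_IDEDMA' .config

Comparing that output between the build that gave ~11.5 MB/sec and the one that gave ~9.5 MB/sec would at least show whether the VIA driver (or the quirks option) is really what changed.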
On Wed, 5 Sep 2001, Konstantin (Kastus) Shchuka wrote:
On Wed, Sep 05, 2001 at 01:43:21PM +0100, Simon Oliver wrote:
For comparison:

bash-2.05# hdparm -tT /dev/hdd

/dev/hdd:
 Timing buffer-cache reads:   128 MB in  1.23 seconds = 104.07 MB/sec
 Timing buffered disk reads:   64 MB in  2.26 seconds =  28.32 MB/sec

bash-2.05# cat /proc/version
Linux version 2.4.4-4GB (root@Pentium.suse.de) (gcc version 2.95.3 20010315 (SuSE)) #1 Sat Jun 23 05:26:59 GMT 2001

This is a PIII-600 with 80 GB Maxtor 98196H8 IDE hard disks, running SuSE Linux 7.2 with the default SuSE 2.4 kernel, with:

  hdparm -d1 -u0 -c0 -m16 /dev/hdd

I found that changing values other than DMA made very little difference to speed, but turning off DMA made a significant difference:

bash-2.05# hdparm -d0 /dev/hdd

/dev/hdd:
 setting using_dma to 0 (off)
 using_dma    =  0 (off)

bash-2.05# hdparm -tT /dev/hdd

/dev/hdd:
 Timing buffer-cache reads:   128 MB in  1.22 seconds = 104.92 MB/sec
 Timing buffered disk reads:   64 MB in 19.84 seconds =   3.23 MB/sec

Be aware that the timings vary from run to run - you might want to take an average of 5 tests.
I just realised that it might be a VIA chipset/Fujitsu HD problem
hdparm -tT /dev/hda
/dev/hda:
 Timing buffer-cache reads:   128 MB in  3.09 seconds = 41.42 MB/sec
 Timing buffered disk reads:   64 MB in  7.43 seconds =  8.61 MB/sec

hdparm -v /dev/hda

/dev/hda:
 multcount    = 16 (on)
 I/O support  =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 2491/255/63, sectors = 40032696, start = 0

hdparm -i /dev/hda

/dev/hda:
 Model=FUJITSU MPE3204AT, FwRev=ED-03-04, SerialNo=05000283
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=512kB, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=40032696
 IORDY=yes, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4
 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4
 Drive Supports: ATA-1 ATA-2 ATA-3 ATA-4

 Kernel Drive Geometry LogicalCHS=2491/255/63 PhysicalCHS=39714/16/63
The motherboard is a FIC PA-2013 (VIA Apollo MVP3) with an AMD K6-2/500 CPU. I run the stock SuSE 2.4.2-4GB kernel on a 7.1 system.

Is it really a bad mobo/HD combination, or the result of bad configuration?
Kastus: I'm wondering the same thing, but I'm not sure how to determine this.

Dan Greenberg
Ann Arbor, MI
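One way to tell whether the chipset driver (rather than the drive) is in play is to look at what the kernel itself reports - a sketch, assuming a 2.4 kernel; the /proc/ide/via file only exists when the via82cxxx driver is actually compiled in, and the device name should be whatever your drive is:

  dmesg | grep -iE 'VP_IDE|VIA|hd[ab]'        # the boot log names the IDE chipset driver and the mode it chose
  cat /proc/ide/via 2>/dev/null               # per-device timing report from the VIA driver, if present
  hdparm -i /dev/hda | grep -i 'DMA modes'    # the starred mode is the one currently selected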
-Kastus
-- Simon Oliver
Thanks, Oliver.

On Wed, 5 Sep 2001, Simon Oliver wrote:
For comparison:
bash-2.05# hdparm -tT /dev/hdd
/dev/hdd:
 Timing buffer-cache reads:   128 MB in  1.23 seconds = 104.07 MB/sec
 Timing buffered disk reads:   64 MB in  2.26 seconds =  28.32 MB/sec
Much better numbers than what I am getting
bash-2.05# cat /proc/version
Linux version 2.4.4-4GB (root@Pentium.suse.de) (gcc version 2.95.3 20010315 (SuSE)) #1 Sat Jun 23 05:26:59 GMT 2001

This is a PIII-600 with 80 GB Maxtor 98196H8 IDE hard disks, running SuSE Linux 7.2 with the default SuSE 2.4 kernel, with:

  hdparm -d1 -u0 -c0 -m16 /dev/hdd

I found that changing values other than DMA made very little difference to speed, but turning off DMA made a significant difference:
I found the same thing
bash-2.05# hdparm -d0 /dev/hdd
/dev/hdd:
 setting using_dma to 0 (off)
 using_dma    =  0 (off)

bash-2.05# hdparm -tT /dev/hdd

/dev/hdd:
 Timing buffer-cache reads:   128 MB in  1.22 seconds = 104.92 MB/sec
 Timing buffered disk reads:   64 MB in 19.84 seconds =   3.23 MB/sec
Be aware that the timings vary from run to run - you might want to take an average of 5 tests.
That's how I have been doing it.

Thanks,
Dan Greenberg
Ann Arbor, MI
-- Simon Oliver
Thanks Karol.

This is /proc/ide/hdb/settings in Caldera 2.2.10 after running "hdparm -X66 -d1 -u1 -m16 -c3 /dev/hdb":

 name                 value        min  max      mode
 ----                 -----        ---  ---      ----
 bios_cyl             2116         0    65535    rw
 bios_head            255          0    255      rw
 bios_sect            63           0    63       rw
 breada_readahead     4            0    127      rw
 bswap                0            0    1        r
 file_readahead       124          0    2097151  rw
 io_32bit             3            0    3        rw
 keepsettings         0            0    1        rw
 max_kb_per_request   64           1    127      rw
 multcount            8            0    8        rw
 nice1                1            0    1        rw
 nowerr               0            0    1        rw
 pio_mode             write-only   0    255      w
 slow                 0            0    1        rw
 unmaskirq            1            0    1        rw
 using_dma            1            0    1        rw

And here's what it looks like in 2.2.19 after running "hdparm -X66 -d1 -u1 -m16 -c3 /dev/hdb":

 name                 value        min  max      mode
 ----                 -----        ---  ---      ----
 bios_cyl             2116         0    65535    rw
 bios_head            255          0    255      rw
 bios_sect            63           0    63       rw
 breada_readahead     4            0    127      rw
 bswap                0            0    1        r
 file_readahead       124          0    2097151  rw
 io_32bit             3            0    3        rw
 keepsettings         0            0    1        rw
 max_kb_per_request   64           1    127      rw
 multcount            8            0    8        rw
 nice1                1            0    1        rw
 nowerr               0            0    1        rw
 pio_mode             write-only   0    255      w
 slow                 0            0    1        rw
 unmaskirq            1            0    1        rw
 using_dma            1            0    1        rw

They are identical! (But 2.2.19 performs worse.)

-Dan Greenberg
Ann Arbor, MI

On Wed, 5 Sep 2001, Karol Pietrzak wrote:
Here's my setup:
hdparm -u1 -c3 -m 16 /dev/hda
check /proc/ide/hda/settings in both SuSE and Caldera. Compare them, post them here even, and then we can discuss what's the problem... whether it's distro related or settings related.
On 4 Sep 2001, Daniel Greenberg wrote:
I recently switched/upgraded from Caldera OpenLinux 2.3 (kernel 2.2.10) to Suse Professional 7.2.
While there's much I like in the new setup, hard drive performance isn't on the list! Indeed, with the default SuSE pre-compiled 2.4.4 kernel, hard drive performance seems abysmal. (I have a Maxtor IDE 91741U4 drive - a 17 GB UDMA66 drive.) Subjectively, the system performs as if drive caching were disabled, with long delays for drive reads to complete, the drive light on frequently, and grinding noises coming from the drive. Since I still have OpenLinux 2.3 installed and bootable on the same drive, it's easy for me to switch back and forth between SuSE and Caldera, just to confirm that the drive performs much better under Caldera. (The 2.4.4 kernel is already set to use DMA, and hdparm does nothing to improve performance.)
I tried switching from the 2.4.4 kernel to the pre-compiled 2.2.19 kernel which comes with Suse 7.2, and indeed under 2.2.19, drive performance "feels" much better. I also tried compiling my own 2.2.19 and 2.2.18 kernels, to see if that would further improve the drive performance. Here are the results - using "hdparm -t":
Results for "hdparm -t" (rate for buffered disk reads of 64 MB):
  SuSE-compiled 2.2.19 kernel      ~ 9.5 MB/sec
  self-compiled 2.2.19 kernel      ~11.5 MB/sec
  self-compiled 2.2.18 kernel      ~11.5 MB/sec
  Caldera-compiled 2.2.10 kernel   ~15.5 MB/sec
For the record, this is an AMD K6-2 system running at 450 MHz on a TYAN Trinity 100AT (S1590S) motherboard. The chipset is a VIA MVP3. "free" reports 322 MB of main memory installed.
I'm confused on a number of counts:
- Why does the 2.4.4 performance feel so poor? Is this the infamous 2.4 kernel series virtual memory (vm) problem? A search on deja.com turned up several references to via chipset problems under 2.4 - maybe that's what is responsible?
- Why would the Suse pre-compiled 2.2.19 kernel perform so much worse than a self-compiled kernel? (I couldn't find the 2.2.19 sources on the suse web site, so I downloaded the sources from kernel.org)
- Finally, what do I need to do to get the IDE performance under 2.2.19 back to what it was under 2.2.10?

--
noodlez: Karol Pietrzak
PGP KeyID: 0x3A1446A0
Hmm, yes, they are identical. Here are the differences:

 o Distros: Caldera / SuSE
 o Settings: same
 o Kernel configs: Caldera 2.2.10 / SuSE 2.2.19

Your check knocks off number 2 as the culprit. Better check the kernel configs now.

Here's probably the easiest way: get both configs into the same folder and diff them, filtering out the comment lines. diff can compare command output directly (via bash process substitution), so you don't have to scan through manually:

  diff <(grep -v '^#' vmlinuz-2.2.10-config) <(grep -v '^#' vmlinuz-2.2.19-config)

If that fails, do a regular diff and then scan through manually :( (There's also a worked example after the quoted message below.)

On 5 Sep 2001, Daniel Greenberg wrote:
Thanks Karol.
This is /proc/ide/hdb/settings in Caldera 2.2.10 after running "hdparm -X66 -d1 -u1 -m16 -c3 /dev/hdb" :
 name                 value        min  max      mode
 ----                 -----        ---  ---      ----
 bios_cyl             2116         0    65535    rw
 bios_head            255          0    255      rw
 bios_sect            63           0    63       rw
 breada_readahead     4            0    127      rw
 bswap                0            0    1        r
 file_readahead       124          0    2097151  rw
 io_32bit             3            0    3        rw
 keepsettings         0            0    1        rw
 max_kb_per_request   64           1    127      rw
 multcount            8            0    8        rw
 nice1                1            0    1        rw
 nowerr               0            0    1        rw
 pio_mode             write-only   0    255      w
 slow                 0            0    1        rw
 unmaskirq            1            0    1        rw
 using_dma            1            0    1        rw
And here's what it looks like in 2.2.19 after running "hdparm -X66 -d1 -u1 -m16 -c3 /dev/hdb":
 name                 value        min  max      mode
 ----                 -----        ---  ---      ----
 bios_cyl             2116         0    65535    rw
 bios_head            255          0    255      rw
 bios_sect            63           0    63       rw
 breada_readahead     4            0    127      rw
 bswap                0            0    1        r
 file_readahead       124          0    2097151  rw
 io_32bit             3            0    3        rw
 keepsettings         0            0    1        rw
 max_kb_per_request   64           1    127      rw
 multcount            8            0    8        rw
 nice1                1            0    1        rw
 nowerr               0            0    1        rw
 pio_mode             write-only   0    255      w
 slow                 0            0    1        rw
 unmaskirq            1            0    1        rw
 using_dma            1            0    1        rw
They are identical! (But 2.2.19 performs worse.)

--
noodlez: Karol Pietrzak
PGP KeyID: 0x3A1446A0
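As a concrete example of the comparison suggested above (the file names here are placeholders - point them at wherever the two configs actually live; process substitution needs bash):

  diff <(grep -v '^#' config-2.2.10 | sort) <(grep -v '^#' config-2.2.19 | sort) | grep -iE 'IDE|PCI|VIA'

Sorting first keeps reordered-but-identical options from showing up as differences, and the final grep narrows the output to the options most likely to matter here.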
Participants (5):
- Daniel Greenberg
- Karol Pietrzak
- Konstantin (Kastus) Shchuka
- Nick Zentena
- Simon Oliver