
Has anyone had any experience with 10GigE? I'm not after anything at all complex, just a single server that needs something a little faster than GigE talking to a single switch that can spread the bandwidth over 12+ GigE ports.

As an aside, I think it would be nice if someone developed an Ethernet card and switch as one combined device: a PCIe card that looks to the system like a regular Ethernet port connected to a 4-port switch, with the only software-visible difference being that it has 4x the bandwidth of a regular Ethernet port.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

Do you mean something like this?
http://www.startech.com/Networking-IO/Adapter-Cards/4-Port-PCI-Express-Gigab...

R.

On 28 May 2014 22:16, Russell Coker <russell@coker.com.au> wrote:
Has anyone had any experience with 10GigE?
I'm not after anything at all complex, just a single server that needs something a little faster than GigE talking to a single switch that can spread the bandwidth over 12+ GigE ports.
As an aside, I think it would be nice if someone developed an Ethernet card and switch as one combined device. It would be a PCIe card that looks to the system like a regular Ethernet port connected to a 4 port switch with the only software visible difference being that it had 4* the bandwidth of a regular Ethernet port.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
_______________________________________________
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main
--
==============================
|| Rasika Amarasiri, PhD
|| Rasika.Amarasiri@gmail.com
==============================

On Wed, 28 May 2014, Rasika Amarasiri <rasika.amarasiri@gmail.com> wrote:
Do you mean something like this? http://www.startech.com/Networking-IO/Adapter-Cards/4-Port-PCI-Express-Gigabit-Ethernet-NIC-Network-Adapter-Card~ST1000SPEX4
No, that's a card with four separate Ethernet ports, which is quite different. I want a card that looks like a single port to the OS but has a switch built in, providing 4x the bandwidth through that single logical port. I could get a similar result in software by running bridging on a 4-port card (and may end up doing that), but that has other issues.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
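The software-bridge fallback mentioned above can be sketched with iproute2: enslave the four ports of a quad NIC to a Linux bridge so they behave like a small switch. This is only a sketch; the interface names (eth1..eth4) and address are hypothetical and need adjusting to the actual hardware.

```shell
# Create a bridge and attach the four physical ports to it.
ip link add name br0 type bridge
for nic in eth1 eth2 eth3 eth4; do
    ip link set "$nic" master br0   # enslave the port to the bridge
    ip link set "$nic" up
done
# The server's own address lives on the bridge, not the member ports.
ip addr add 192.168.1.1/24 dev br0
ip link set br0 up
```

Note this gives switching across the four ports, but any single peer still only sees one port's worth of bandwidth.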

On 28/05/2014 10:16 PM, Russell Coker wrote:
Has anyone had any experience with 10GigE?
I'm not after anything at all complex, just a single server that needs something a little faster than GigE talking to a single switch that can spread the bandwidth over 12+ GigE ports.

The Intel 10GigE cards are well supported under Linux. There is the X520 series with Direct Attach SFP+/Twinax, and the X540, which supports 10GBASE-T over Cat6 (55m) / Cat6a (100m). Both come in single and dual port versions.
The signal processing and coding for 10GBASE-T is very intense; hence the X540 has a heatsink rivalling that of a low-end video card. DA-SFP+ is the preferred cabling method for 10GigE; it isn't as power intensive, and is a bit cheaper too. The downside is that above a couple of metres you need active cables, which get a bit more expensive.
As an aside, I think it would be nice if someone developed an Ethernet card and switch as one combined device. It would be a PCIe card that looks to the system like a regular Ethernet port connected to a 4 port switch with the only software visible difference being that it had 4* the bandwidth of a regular Ethernet port.
Have you looked at 802.3ad / Link Aggregation? This might achieve the outcome you want - more throughput when you have many clients talking to the same server, without the expense of 10GigE NICs and switches.
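An 802.3ad bond can be set up with iproute2 along these lines. This is a hypothetical sketch: the interface names are assumptions, and the switch ports the NICs plug into must also be configured for LACP.

```shell
# Create an LACP bond; layer3+4 hashing spreads different flows
# across the member links.
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
# Members must be down before enslaving them.
ip link set eth1 down
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0
ip link set bond0 up
```

The usual caveat applies: any single TCP flow still tops out at one member link's speed, so aggregation helps with many concurrent clients rather than one fast transfer.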

On Thu, 29 May 2014, "Mathew McBride" <matt@bionicmessage.net> wrote:
The Intel 10GigE cards are well supported under Linux. There is the X520 series with Direct Attach SFP+/Twinax and the X540 supports 10GBASE-T over Cat6 (55m) / Cat6a (100m). Both come in single and dual port versions.
The signal processing and coding for 10GBASE-T is very intense; hence the X540 has a heatsink rivalling that of a low-end video card. DA-SFP+ is the preferred cabling method for 10GigE; it isn't as power intensive, and is a bit cheaper too. The downside is that above a couple of metres you need active cables, which get a bit more expensive.
http://www.staticice.com.au/cgi-bin/search.cgi?q=intel+sfp%2B&spos=1

Thanks for that information. I did a search on staticice and the cheapest price was $73 for a 1m cable. Yikes!
As an aside, I think it would be nice if someone developed an Ethernet card and switch as one combined device. It would be a PCIe card that looks to the system like a regular Ethernet port connected to a 4 port switch with the only software visible difference being that it had 4* the bandwidth of a regular Ethernet port.
Have you looked at 802.3ad / Link Aggregation? This might achieve the outcome you want - more throughput when you have many clients talking to the same server, without the expense of 10GigE NICs and switches.
Staticice has quad port PCIe cards for $384, dual port for $99, and single port for $11. It seems that some form of bridging or bonding will be the best option.

I only have one client who has real problems with GigE speed: when some users run batch jobs, the other users get poor performance, and the cheapest solution will be to provide a GigE port on the server for each user who runs such batch jobs.

Thanks for all the other information on 10GigE. It seems that this isn't a good choice for my budget-conscious clients at this time.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

On Wed, 28 May 2014 10:16:39 PM Russell Coker wrote:
Has anyone had any experience with 10GigE?
Yeah, we've been using it since 2010.

The biggest issue we had was with a big Force10 C300 switch that was intended to be a router but failed to implement sending PMTUD responses correctly (at all). It caused chaos until we realised what was happening and set up routes with specified MTUs to avoid PMTUD. The 2012 upgrade obsoleted that switch, as all our heavy internal networking is now over QDR and FDR14 InfiniBand instead.

Our external connectivity is 10gigE, and we use both Intel and Mellanox 10gigE NICs under Debian Wheezy and RHEL6 without issues. We did have issues with RHEL5 and Mellanox 10gigE cards where (for example) traffic from VLAN4 arriving on the fibre plugged into eth1 would appear on the interface eth0, even if there was no fibre plugged into eth0 and the kernel had reported the interface as down. Unplugging eth1 stopped the traffic arriving on eth0. Traffic that was meant to arrive on eth0 appeared on eth0 as expected, and did go away if eth0 was unplugged. We reported that against RHEL5.5 (November 2010) and it finally got fixed in RHEL5.9 (January 2013), 26 months later, released just after the hardware in question was decommissioned. :-(

How's that?

cheers,
Chris

--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
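The route-based workaround described above can be sketched with iproute2: pin the MTU on routes that pass through the broken router so path MTU discovery is never needed. The prefix, gateway, and MTU here are placeholders, not the actual values from that network.

```shell
# Force a fixed MTU on a specific route; "lock" prevents the kernel
# from raising it via PMTUD for destinations behind the broken router.
ip route add 10.20.0.0/16 via 192.0.2.1 mtu lock 1500
```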

On 28-May-14 10:16 PM, Russell Coker wrote:
Has anyone had any experience with 10GigE?
Apart from Intel, Chelsio are another well-respected 10GigE name. They are well supported under Linux and coped well with saturation testing in my old employer's R&D benchmarks (capable of sustained and stable 1300Gbps throughput, way beyond what most people need). I haven't worked with them for a few years so my memory of their specs is now a little hazy; have a gander through www.chelsio.com. They aren't cheap, but high-end quality rarely is.

--
Email to luv-sub@tripleg.net.au will bounce. Email to george at the same domain will accept.

capable of sustained and stable 1300Gbps

I assume you mean 1300 MBps - but how is that possible, when the maximum throughput for 10Gb Ethernet is 1250 MBps (10000/8)? Does Chelsio use a compression technology?
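The arithmetic behind the question above is simple: a 10GbE line rate of 10,000 Mbit/s divided by 8 bits per byte gives 1250 MB/s, so a sustained 1300 MB/s would exceed line rate (and real TCP payload throughput sits below 1250 MB/s once framing and protocol overheads are counted).

```shell
# 10GbE line rate in Mbit/s converted to MB/s.
line_rate_mbit=10000
echo "$((line_rate_mbit / 8)) MB/s"   # prints: 1250 MB/s
```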
Cheers Duncan On 30 May 2014 22:21, George Georgakis <luv-sub@tripleg.net.au> wrote:
On 28-May-14 10:16 PM, Russell Coker wrote:
Has anyone had any experience with 10GigE?
Apart from Intel, Chelsio are another well-respected 10GigE name. They are well-supported under Linux and have coped well with saturation testing at my old employer's R&D benchmarks (capable of sustained and stable 1300Gbps throughput, way beyond what most people need).
I haven't worked with them for a few years so my memory of their specs is now a little hazy; have a gander through www.chelsio.com. They aren't cheap, but high-end quality rarely is.
-- Email to luv-sub@tripleg.net.au will bounce. Email to george at the same domain will accept.
participants (6)
- Chris Samuel
- duncan16v
- George Georgakis
- Mathew McBride
- Rasika Amarasiri
- Russell Coker