The AMC331 is a single-width, mid-height AdvancedMC™ (AMC) module based on the AMC.1 specification. The AMC331 provides a dual-port InfiniBand QDR Host Channel Adapter (HCA). Each port is selectable to run at 10, 20, or 40 Gb/s InfiniBand. The module utilizes the Mellanox ConnectX-2 VPI chip.
Mating Connector for LVDS Custom-Load Boards: a 12X InfiniBand connector compatible with the NI SHB12X-B12X LVDS cable, for building custom-load boards for LVDS test and measurement. Right-angled, surface-mount PCB mounting.
IBTracer™ 4X is the world's first 4X analyzer for the InfiniBand™ Architecture. LeCroy's new analyzer dramatically shortens engineering development cycles and reduces the cost of developing InfiniBand-based semiconductors, switches, routers, and software. The 4X InfiniBand protocol extends the existing 1X protocol by supporting up to four 2.5 Gb/s dual-simplex connections for an effective duplex transmission speed of 10 Gb/s.
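The lane arithmetic behind the 4X designation can be sketched as follows; the function name is illustrative, not from any vendor API:

```python
# Illustrative sketch of InfiniBand multi-lane rate arithmetic.
# A 4X link aggregates four dual-simplex lanes; at the original
# 2.5 Gb/s per-lane signalling rate this yields 10 Gb/s per direction.

def link_rate_gbps(lane_rate_gbps: float, lanes: int) -> float:
    """Raw per-direction rate of a multi-lane InfiniBand link."""
    return lane_rate_gbps * lanes

# 1X link: a single 2.5 Gb/s lane.
print(link_rate_gbps(2.5, 1))  # -> 2.5
# 4X link: four lanes aggregated, as analyzed by a 4X protocol analyzer.
print(link_rate_gbps(2.5, 4))  # -> 10.0
```

The same scaling explains the 10/20/40 Gb/s port rates of 4X SDR/DDR/QDR hardware elsewhere in this catalog: the per-lane signalling rate doubles at each generation while the lane count stays at four.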
The ART132 is an I/O expansion ATCA Rear Transition Module (ARTM) that provides PCIe, GbE, InfiniBand, USB, and management I/O for the front blade. The module is designed to mate with Emerson front blades such as the ATCA-7360 and ATCA-7365.
Want to save time by measuring sensitivity on all four QSFP+ channels at the same time? Here is the setup: BERT + E/O converter + optical attenuator + optical power meter, replicated four times across two mainframes. Only one clock source is needed. Covers 16G Fibre Channel and InfiniBand FDR.
The SMART systems architecture is developed around open, standard technologies such as Linux, InfiniBand™, MPI, VSIPL, and CORBA. Through the support of well-known development tools and libraries, customers can reduce development time and shorten time-to-deployment. This approach also gives customers the flexibility to operate within a heterogeneous computing environment.
The NC1000 Series amplified noise modules produce AWGN as high as +13 dBm and have bandwidths up to 10 GHz. The high-power modules are designed to test noise immunity for cable TV equipment, secure communication channels, and military jamming systems. The lower-power modules (≤ 0 dBm) serve as random jitter sources for many applications, including PCI Express, InfiniBand, and 10 GigE.
The explosive adoption of high-speed serial data links and the proliferation of multi-lane SerDes channels have created a new set of challenges for semiconductor design and test teams. These multi-Gbps low-voltage differential signaling (LVDS) channels are proliferating in many standard forms, including PCI Express, Gigabit Ethernet, Serial ATA, RapidIO, Fibre Channel, and InfiniBand, to name just a few.
The 40 Gbps QSFP+ transceiver is well suited for InfiniBand and 40GBASE-SR4 / 40GBASE-LR4 applications. It combines the higher-density attractions of parallel modules with some of the key advantages normally associated with SFP+ based modules. It is intended for use in short-reach applications in switches, routers, and data center equipment, where it provides higher density and lower cost compared with standard SFP+ modules.
dataMate® developed SFP and SFP+ Copper Loopbacks as a means to test links in networks or in other devices. Our copper loopbacks can be used in both copper and optical ports, offering the user a more flexible, robust and economical solution. They are designed for high-speed data rates of up to 10 Gbps to support Fibre Channel, InfiniBand and Ethernet.
The DM-338-0 is a loopback module in a CXP form factor. It provides 12 pairs of transmit data channels connected to the corresponding receive channels. These data channels can operate at transmission speeds up to 10 Gbps. The DM-338-0 is compliant with the CXP specification (InfiniBand Architecture Annex 6 and SFF-8642).
The FMC108 is an FPGA Mezzanine Card (FMC) per the VITA 57 specification. The FMC108 has two QSFP+ cages, which allow dual 10GbE/40GbE/SRIO/PCIe/40Gb InfiniBand and Aurora to be routed to the appropriate FMC pins. The FMC108 has dual re-drivers on board, allowing long copper cables on the QSFP+ ports instead of fiber to reduce total system cost.
The test system cluster architecture is based on dual-CPU or 4-CPU PCs acting as cluster nodes. The nodes communicate and synchronise over a high-speed network (Myrinet or InfiniBand). A modification of the Linux operating system allows the test execution and evaluation algorithms to run in hard real-time on reserved CPUs, where scheduling is non-preemptive and controlled by the test system itself. The interrupts caused by interfaces to the system under test may be relayed to CPUs designated exp...