AMC331 - AMC Dual-Port QDR InfiniBand The AMC331 is a single-width, mid-height AdvancedMC™ (AMC) module based on the AMC.1 specification. The AMC331 provides a dual-port InfiniBand QDR Host Channel Adapter (HCA); each port is selectable to run at 10, 20, or 40 Gb/s InfiniBand. The module uses the Mellanox ConnectX-2 VPI chip.
AMC330 - Dual-Port InfiniBand The AMC330 is a single-width, mid-height AdvancedMC™ (AMC) module based on the AMC.1 specification. The AMC330 provides dual 4x InfiniBand at 10 or 20 Gb/s per port.
Right-Angle InfiniBand Connector - Mating Connector for LVDS Custom-Load Boards This 12X InfiniBand connector is compatible with the NI SHB12X-B12X LVDS cable and is used to build custom-load boards for LVDS test and measurement. Right-angled, surface-mount PCB mounting.
IBTracer 4X - 4X InfiniBand Protocol Analyzer IBTracer™ 4X is the world's first 4X analyzer for the InfiniBand™ Architecture. LeCroy's new analyzer dramatically shortens engineering development cycles and reduces the costs of developing InfiniBand-based semiconductors, switches, routers, and software. The 4X InfiniBand protocol extends the existing 1X protocol by supporting up to four 2.5 Gb/s dual-simplex connections for an effective duplex transmission speed of 10 Gb/s.
FMC107 - FMC Dual QSFP+ for 10GbE/40GbE/SRIO/PCIE/40Gb InfiniBand/AURORA The FMC107 is an FPGA Mezzanine Module per the VITA 57 specification. The FMC107 has two QSFP+ cages that allow dual 10GbE/40GbE/SRIO/PCIe/40Gb InfiniBand and Aurora to be routed to the appropriate FMC pins.
ART132 - ATCA Rear Transition Module for Emerson Blades The ART132 is an I/O expansion ATCA Rear Transition Module (ARTM) that provides PCIe, GbE, InfiniBand, USB and Management I/O for the front blade. The module is designed to mate with Emerson front blades such as ATCA-7360 and ATCA-7365.
QSFP+ - Transceiver The 40 Gbps QSFP+ transceiver is well suited for InfiniBand and 40GBASE-SR4 / 40GBASE-LR4 applications. It combines the higher-density attractions of parallel modules with some of the key advantages normally associated with SFP+-based modules. It is intended for short-reach applications in switches, routers, and data center equipment, where it provides higher density and lower cost compared with standard SFP+ modules.
DM-338-0 - Loopback CXP The DM-338-0 is a loopback module in a CXP form factor. It provides 12 pairs of transmit data channels connected to the corresponding receive channels. These data channels can operate at transmission speeds up to 10 Gbps. The DM-338-0 is compliant with the CXP Specification (InfiniBand Architecture Annex 6 and SFF-8642).
SMART Systems The SMART systems architecture is developed around open, standard technologies such as Linux, InfiniBand™, MPI, VSIPL, and CORBA. Through the support of well-known development tools and libraries, customers can reduce development time and shorten time-to-deployment. This approach also gives customers the flexibility to operate within a heterogeneous computing environment.
QSFP+Tester Rx - Multi-Channel Bit Error Rate Tester Want to save time by measuring sensitivity on all four QSFP+ channels at the same time? Here we go: BERT + E/O converter + optical attenuator + optical power meter, four times over in two mainframes. Only one clock source is needed. Covers 16G Fibre Channel and InfiniBand FDR.
dataMate® - Loopback SFP & SFP+ dataMate® developed SFP and SFP+ Copper Loopbacks as a means to test links in networks or in other devices. Our copper loopbacks can be used in both copper and optical ports, offering the user a more flexible, robust and economical solution. They are designed for high-speed data rates of up to 10 Gbps to support Fibre Channel, InfiniBand and Ethernet.
ETSerdes - Embedded SerDes Test The explosive adoption of high-speed serial data links and the proliferation of multi-lane SerDes channels have created a new set of challenges for semiconductor design and test teams. These multi-Gbps low voltage differential signaling (LVDS) channels are proliferating in many standard forms, including PCI Express, Gigabit Ethernet, Serial ATA, RapidIO, Fibre Channel, and InfiniBand, to name just a few.
B040 - 4.25 Gb/s BERT Multi-rate BERT covering 10 standards between 125 Mb/s and 4.25 Gb/s: OC-3/12/48, Fast Ethernet/Gigabit Ethernet, 1/2/4G Fibre Channel, InfiniBand, ESCON. PRBS 2^7−1, 2^23−1, and 2^31−1, K28.5 pattern, or clock generation. Error detection with internal integrated CDR. Trigger: data rate divided by n (n = 2, 4, 8, 16, 32).
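As background on the patterns named in the B040 entry (an illustrative sketch, not part of the catalog): PRBS 2^7−1 is the output of a maximal-length 7-bit linear-feedback shift register, which repeats every 127 bits. The tap choice below (x^7 + x^6 + 1) is the common PRBS7 convention, not something stated in this datasheet:

```python
def prbs7():
    """Generate one full period of a PRBS 2^7-1 sequence using a
    Fibonacci LFSR with feedback taps at bits 7 and 6 (x^7 + x^6 + 1)."""
    state = 0x7F                 # any non-zero 7-bit seed works
    bits = []
    for _ in range(127):         # period of a maximal-length 7-bit LFSR
        fb = ((state >> 6) ^ (state >> 5)) & 1   # XOR of the two taps
        bits.append(state & 1)                   # emit the low bit
        state = ((state << 1) | fb) & 0x7F       # shift in the feedback
    return bits

seq = prbs7()
print(len(seq), sum(seq))   # -> 127 64  (an m-sequence has 64 ones, 63 zeros)
```

The longer patterns (2^23−1, 2^31−1) work the same way with 23- and 31-bit registers; BERTs use them because every run length up to the register width appears, stressing clock recovery.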
NC1000 Series - Amplified Noise Modules The NC1000 Series amplified noise modules produce AWGN as high as +13 dBm and have bandwidths up to 10 GHz. The high-power modules are designed to test noise immunity for cable TV equipment, secure communication channels, and military jamming systems. The lower-power modules (<= 0 dBm) are random jitter sources for many applications, including PCI Express, InfiniBand, and 10 GigE.
FMC108 - FMC Dual QSFP+ with re-driver The FMC108 is an FPGA Mezzanine Module per the VITA 57 specification. The FMC108 has two QSFP+ cages that allow dual 10GbE/40GbE/SRIO/PCIe/40Gb InfiniBand and Aurora to be routed to the appropriate FMC pins. The FMC108 has dual re-drivers on board, allowing long copper QSFP+ cables to be used instead of fiber to reduce total system cost.
Test Engines The test system cluster architecture is based on dual-CPU or 4-CPU PCs acting as cluster nodes. The nodes communicate and synchronise over a high-speed network (Myrinet or InfiniBand). A modification of the Linux operating system allows the test execution and evaluation algorithms to run in hard real-time on reserved CPUs, where scheduling is non-preemptive and controlled by the test system itself. The interrupts caused by interfaces to the system under test may be relayed to CPUs designated explicitly for their handling. This approach offers the opportunity to utilise high-performance standard hardware and the services provided by the widely accepted Linux operating system in combination with all mechanisms required for hard real-time computing. The cluster architecture makes it possible to distribute interfaces with high data throughput across different nodes, so that PCI bus overload can be avoided. In addition, the CPU load can be balanced by allocating test data generators, environment simulations and checkers for the behaviour of the system under test (“test oracles”) on dedicated CPUs.
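The CPU-reservation step described above can be sketched with standard Linux process affinity (a minimal illustration, not the vendor's actual tooling; the `pin_to_cpu` helper and the CPU numbers are assumptions, and a real deployment would first isolate the target CPUs from the general scheduler, e.g. via the isolcpus= kernel parameter):

```python
import os

def pin_to_cpu(pid, cpu):
    """Restrict a process to a single CPU, the way a test oracle or
    environment simulation would be pinned to its dedicated core."""
    os.sched_setaffinity(pid, {cpu})     # Linux-only affinity call
    return os.sched_getaffinity(pid)     # read back the effective mask

# Pin the current process (pid 0 means "self") to CPU 0; CPU 0 merely
# stands in for a reserved core here.
print(pin_to_cpu(0, 0))   # -> {0}
```

Interrupt steering to designated CPUs works analogously through each interrupt's affinity mask (on Linux, /proc/irq/N/smp_affinity), so device interrupts never preempt the real-time test CPUs.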
InfiniBand - Highly scalable single-fabric I/O standard with 20 Gb/s host connectivity and 60 Gb/s switch-to-switch links.