CommuniGate Pro SIP Performance Test on IBM System z9

Technical Summary Report

Version: 03
Status: final
Updated: 16 March 2007

PSSC IBM Customer Centre, Montpellier
March 16, 2007

Page 1 of 14
1 Executive summary

In this document, you'll find the results of tests run in the PSSC Montpellier in July-September 2006. The request, coming from IBM Brazil, was to execute a performance test of the CommuniGate Pro Internet Communications Platform in the System z9 environment, to meet a customer request for information on scaling CommuniGate Pro on System z.

The main objectives of the performance test were:
• To prove the z9 platform as a suitable platform for CommuniGate Pro in both “single-server” and “SIP Farm Dynamic Cluster” architectures.
• To prove the capability of the z9 platform for Service and Telecommunications Providers, with a subscriber base of over 20 million accounts.
• To validate the scalability of a VoIP service offering delivered on the CommuniGate Pro SIP Farm Dynamic Cluster running on System z Linux.
• To demonstrate the administration, reliability, and scalability benefits and cost-effectiveness of running a CommuniGate Pro SIP Farm Dynamic Cluster on the z9 platform.

During this performance test the following objectives were achieved:
• The z9 platform proved to be a superb platform for the CommuniGate Pro Internet Communications Platform, in terms of overall performance as well as scalability. Ultimately, the testing demonstrated a subscriber base of 25 million accounts (all of which were enabled not only for SIP/RTP-based VoIP but also e-mail and calendaring), with 5 million accounts registered simultaneously.
• The z9 platform demonstrated very good overall vertical scalability as the number of Linux virtual machines was increased to a tested maximum of 20 z/VM Linux virtual machines for the CommuniGate Pro SIP Farm Dynamic Cluster (with another 10 z/VM Linux virtual machines used for other performance test needs such as an NFS server, DNS servers, and load generators), with excellent CPU usage rates and near-linear performance increases as the number of virtual CPs was increased.
• The service and clustering capabilities of CommuniGate Pro, in conjunction with the reliability and efficiency of the z9 platform, demonstrate a “model architecture” that Service and Telco Providers should be strongly considering to satisfy both current VoIP subscriber levels and future growth of VoIP services.
2 Introduction

CommuniGate Systems develops carrier-class Internet communications software for broadband and mobile service providers, enterprises, and OEM partners worldwide. Its flagship product, the CommuniGate Pro Internet Communications Platform, is a recognized leader in scalable e-mail messaging, collaboration, and multi-media over IP (VoIP), running on more than 30 major computer platforms. Its unsurpassed scalability and feature set have won more industry awards than any other software communications solution on the market.

Session Initiation Protocol (SIP) is an application-layer control (signaling) protocol for creating, modifying, and terminating sessions with one or more participants. These sessions include Internet telephone calls, multimedia distribution, and multimedia conferences.
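For illustration, a minimal SIP INVITE request of the kind that initiates such a session might look as follows. This is an illustrative sketch only: the addresses, tags, and Call-ID are hypothetical (the example.lan domain and 192.168.1.x addresses are borrowed from the test configuration described later in this report), and the SDP session description body is omitted.

```text
INVITE sip:test1@example.lan SIP/2.0
Via: SIP/2.0/UDP 192.168.1.111:5060;branch=z9hG4bK776asdhds
Max-Forwards: 70
From: <sip:test2@example.lan>;tag=1928301774
To: <sip:test1@example.lan>
Call-ID: a84b4c76e66710@192.168.1.111
CSeq: 314159 INVITE
Contact: <sip:test2@192.168.1.111:5060>
Content-Length: 0
```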
3 Performance test preparation

3.1 Technical infrastructure

3.1.1 Hardware
• IBM System z9 109 S38 (S54 starting 10th August):
• 38 (54) processors
• 128 (256) GB of memory
• 8 x FICON channels (FICON Express adapter)
• 2 x OSA Express Fast Ethernet adapter
• 2 x OSA Express2 10 Gigabit Ethernet
• IBM TotalStorage DS8100
• IBM TotalStorage Enterprise Tape Drive 3590 tape drive for backup purposes

3.1.2 Software

3.1.2.1 z/VM Virtualization
• Operating system: IBM z/VM 5.2 RSU 5202 (March 2006)

3.1.2.2 Operating System on Servers
• Operating system: SUSE Linux Enterprise Server Version 9 Service Pack 3, 64-bit

3.1.2.3 Performance monitoring & reporting tools
• z/VM Performance Toolkit (optional component of z/VM)
• SUSE Linux APPLDATA monitor (built-in component of SUSE Linux Enterprise Server)
• RMFPMS for Linux: http://www-03.ibm.com/servers/eserver/zseries/zos/rmf/rmfhtmls/pmweb/pmlin.html
3.1.2.4 Network tools
• BIND 9 for DNS routing
• Linux Virtual Server (LVS) using the “direct routing” method

3.1.2.5 Test tools
• SIPp for SIP traffic generation

3.1.2.6 Software Under Test
• CommuniGate Pro version 5.0.10 for 64-bit zSeries Linux (“s390x” platform)

3.1.3 Network

Two physical networks were set up:
• Fast Ethernet for the administration network 10.3.48.x. This network was accessible remotely, through a public IP address, by the CommuniGate Systems and IBM Brazil teams.
• 10 Gigabit Ethernet for the injection network (direct cable connection) 192.168.1.x

One virtual network was set up:
• HiperSockets network 192.168.2.x. This network was used for communication between the Linux servers for some of the runs.

A HiperSockets network is a virtual TCP/IP network defined at the microcode level of the System z machine. It enables network communication between virtual machines running in different LPARs or under z/VM. Up to 16 HiperSockets networks can be defined in one machine.
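The SIPp invocations used in the tests are not reproduced in this report. As a hedged sketch only, a SIPp UAC launch of the kind a load generator might run could look like the command below; the target address, call rate, and concurrency cap are hypothetical, chosen merely to mirror the 60-second call duration used in these tests.

```shell
# Illustrative only: print the kind of SIPp UAC command a load generator might run.
# -sn uac  : use SIPp's built-in UAC scenario
# -r 100   : attempt 100 new calls per second
# -d 60000 : hold each call for 60,000 ms (60 s), matching the test profile
# -l 10000 : cap the number of simultaneous calls
echo "sipp -sn uac -r 100 -d 60000 -l 10000 192.168.1.20:5060"
```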
EMEA ATS PSSC
March 20, 2007
Page 4 of 14
The HiperSockets network is a particularly efficient mechanism for “intra-cluster” requirements within the CommuniGate Pro Dynamic Cluster. The Dynamic Cluster architecture behaves in some ways like a grid system, with intra-cluster exchanges between cluster nodes for load distribution, traffic allocation, call and PBX bridging, and application-level redundancy. Within the Dynamic Cluster, all nodes are active, and all accounts are serviced on all nodes in the cluster. There is no segmentation or division of the subscriber base, and one of the ramifications of this great simplicity is intra-cluster communication, which benefits greatly from the very low latency and excellent networking performance of the z9.

[Diagram: Europe ATS - PSSC Benchmark Architecture — z9 EC with FICON channels to the DS8100; Virtual LAN 192.168.2.0]
“rolling upgrades”), the traffic is redistributed to other SIP Farm members to maintain consistent signaling. The following diagram demonstrates an 8+4 Dynamic Cluster [8 Frontends, 4 Backends], architected in a simple layout where all Frontends are part of the SIP Farm, and therefore all Frontends use the same configuration and provide all services to all domains and accounts:
3.1.4.2 Logical Architecture

The following diagram illustrates the logical architecture of the CommuniGate Pro SIP Farm Dynamic Cluster, as well as the storage layout, load balancer, and load generation systems used in the CommuniGate Pro z9 platform testing. All Linux instances used in the testing were created on z/VM on System z.
3.2 Environment setup

3.2.1 DS8100 configuration

3.2.1.1 Physical Layout

The DS8100 disk storage system was configured with 4 Logical Control Units (LCUs) connected using 8 FICON channels to the z9 machine. Each of these LCUs emulated 80 x 3390 model 3 DASDs, for a total of 320 devices with these device addresses:
• 7200 - 724F
• 7300 - 734F
• 7400 - 744F
• 7500 - 754F
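As a quick sanity check, each hexadecimal address range above covers exactly 80 devices, giving the 320 devices stated; a minimal sketch using shell arithmetic:

```shell
# Each LCU exposes device addresses 7x00-7x4F (hex); count the devices per range.
per_lcu=$(( 0x724F - 0x7200 + 1 ))   # 0x4F + 1 = 80 devices
total=$(( per_lcu * 4 ))             # 4 LCUs
echo "$per_lcu devices per LCU, $total devices total"
# prints: 80 devices per LCU, 320 devices total
```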
Disks allocated to z/VM and Linux virtual machines were evenly distributed among all LCUs.

3.2.1.2 Logical Layout

The required “Shared File System” (as discussed previously in the section “Clustering Architecture”) for this testing consisted of a single 255 GB Linux LVM Logical Volume built from 116 3390 model 3 DASDs, attached to srv20 via FICON channels as full-pack minidisks:

srv20# for i in $(cat DASD); do fdasd -a /dev/${i}; done | tee log 2>&1
srv20# for i in $(cat DASD); do pvcreate /dev/${i}1; done | tee log 2>&1
srv20# vgcreate -p 256 -s 32m datavg $(for i in $(cat DASD); do echo /dev/${i}1; done)

On this Logical Volume, an XFS filesystem was created with a 4 kB sector size (its block size also defaults to 4 kB):

srv20# mkfs.xfs -f -s size=4096 /dev/datavg/datalv1

Once active with the CommuniGate Pro Dynamic Cluster, and after all 25 million accounts and account meta-data (e.g., password, real name) had been enabled on the cluster, this primary data volume “Shared File System” had approximately the following characteristics (the volume usage varies throughout the testing, but not significantly, due to the profile of these tests):

1K-blocks    Used       Available  Use%  Mounted on
277348288    229616928  47731360   83%   /cgp

3.2.2 System z9 configuration

Three logical partitions (LPARs) were defined on the z9 machine, of which two were used during the tests.

3.2.2.1 Logical Partition 1 (LPAR1)
• Number of CPs (Central Processors): 12 (24 with the S54 model) CPs dedicated to LPAR1
• Central Storage: 96 GB Central Storage dedicated to LPAR1
• Expanded Storage: 32 GB Expanded Storage dedicated to LPAR1 (in order to improve the performance of z/VM paging activity)
• Operating system: z/VM 5.2 RSU 5202
3.2.2.2 Logical Partition 2 (LPAR2)
• Number of CPs (Central Processors): 6 (24 with the S54 model) CPs dedicated to LPAR2
• Central Storage: 48 GB Central Storage dedicated to LPAR2
• Expanded Storage: 16 GB Expanded Storage dedicated to LPAR2 (in order to improve the performance of z/VM paging activity)
• Operating system: z/VM 5.2 RSU 5202
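Taken together, the two active LPARs account for most of the machine's resources; a quick tally of the S54 figures quoted above:

```shell
# Tally the resources dedicated to the two active LPARs (S54 configuration).
cps=$(( 24 + 24 ))        # of the 54 processors in the S54 model
central=$(( 96 + 48 ))    # GB central storage
expanded=$(( 32 + 16 ))   # GB expanded storage
echo "${cps} CPs, ${central} GB central storage, ${expanded} GB expanded storage"
# prints: 48 CPs, 144 GB central storage, 48 GB expanded storage
```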
4 Test Scenarios and Summary Results

4.1 Table of Scenarios and Results
Test | Test Summary                                                                           | CommuniGate Pro Architecture                                                                                               | Load Balancer used? | NAS Server used? | REGISTER/auth used? | Calls per second (as measured by sipp, 60-second call duration)
R1   | CommuniGate Pro single-server test, SIP proxying only                                  | 1 CommuniGate Pro Server performing all functions                                                                          | No  | No  | No  | 781.6
R2   | CommuniGate Pro Cluster with one Frontend and one Backend, SIP proxying only           | 1 Frontend (handles all SIP transactions), 1 Backend (performs registration functions, AOR lookup, and cluster management) | No  | No  | No  | 712.9
R3   | CommuniGate Pro Cluster with multiple Frontends and one Backend, SIP proxying only     | 15 SIP Farm Frontends, 1 Backend                                                                                           | Yes | No  | No  | 6568.4
R4   | CommuniGate Pro single-server, SIP calling with registered accounts                    | 1 CommuniGate Pro Server performing all functions                                                                          | No  | No  | Yes | 175.8
R5   | CommuniGate Pro Cluster with one Frontend and one Backend, SIP calling with registered accounts | 1 Frontend, 1 Backend                                                                                             | No  | No  | Yes | Not tested (planned but not completed due to time)
R6   | CommuniGate Pro Cluster with multiple Frontends and one Backend, SIP calling with registered accounts | 15 SIP Farm Frontends, 1 Backend                                                                            | Yes | No  | Yes | 394.0
R7   | Full SIP Farm Dynamic Cluster, multiple Frontends and Backends                         | 13 SIP Farm Frontends, 7 Backends                                                                                          | Yes | Yes | Yes | 1049.6

4.2 Run Profile of Test R3

Test R3 is notable for its very high performance rates. As described in the table above, this particular test measures “SIP proxy” performance, and is not limited by the significantly higher I/O required for SIP REGISTER authentication and inbound call lookup by AOR (e.g., <sip:test1@example.lan>) in order to relay to the registered contact address (for example, Contact: <sip:test1@192.168.1.111:5060>).

4.2.1 Run summary

During this run, 17 Linux machines were involved:
• srv1 - srv15 acting as Frontends
• srv20 acting as Backend
• srv21 acting as Load Balancer
6 sipp UAS processes were launched on sipp1, and 6 sipp UAC processes were launched on sipp10 machines. The call rate over 1 hour achieved 6,568.4 new SIP calls/second throughput, with 387,536 concurrent active calls on average.

4.2.2 Run summary table

Test Title:                           R3 (R3210801)
Total Time of Test (mm:ss):           64:50
z/VM Linux machines used for SIPp:    10
SIPp UAC processes:                   35
SIPp UAS processes:                   35
Total Calls:                          32,816,231
Calls per second:                     6568.4
Failed Calls:                         0
% of Retransmissions of all packets:  0.108%
SIP UAC Average Response Time:        0.006 seconds (6 ms)
Total CPU Usage in LPAR1:             1930.47% (19.3047 CPUs out of 24)

4.2.3 Resource consumption

The average CPU utilization was 1930.47%.

[Chart: CPU utilization over the run; y-axis scale 0 - 2,500]
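The reported throughput and concurrency figures are mutually consistent via Little's law (L = λ·W): dividing the average number of concurrent calls by the call arrival rate recovers an effective call-holding time of about 59 seconds, in line with the 60-second nominal call duration. A quick arithmetic sketch:

```shell
# Little's law: L (avg concurrent calls) = lambda (calls/s) * W (avg call duration).
# Solve for W from the figures reported for run R3.
awk 'BEGIN {
  L = 387536        # average concurrent active calls
  lambda = 6568.4   # new calls per second
  printf "effective call duration: %.1f s\n", L / lambda
}'
# prints: effective call duration: 59.0 s
```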