
Wednesday, 7 June 2017

Play queue announcements and recordings using Unity Connection

I haven't posted anything on Unity Connection for a while. What has happened so far? Another joker's in the White House, crude has gone up and London is in a shit state of affairs. Back to Unity Connection. This post is about how to play a recorded message before a call gets handed over to a queue, hunt group or reception console.

Scenarios

For example, let's say you have a queue for an IT service desk and in the event of a large outage you want to play a message notifying users of it, "we are currently suffering issues with users connecting to the internet in the Kansas City area and are working to resolve this", before the call is handed over to the queues. Or maybe you want to play a welcome message before handing a call over to a reception console: "welcome to Dead Meat Inc. All our meat is guaranteed 100% dead before it lands on your plate". Anyway, the scenarios are endless, but you can see what I am getting at.

Nuts and bolts

These pre-recorded messages are nothing more than recorded greetings in a call handler; it is that simple. So let us get started.

1. Create the call handler CTI route points. For this particular exercise, you need two CTI route points. I will explain later why you need two and not one. The way to do this is to add a CTI RP in CUCM and do a call forward all to your voicemail pilot point. For this example I will use 900617 and 900618 for the two call handlers.
2. Add the first call handler. I have called it IT outage notifications and assigned it extension 900617:

Fig 1



3. Go to Unity Connection and add the first forwarding rule, pointing the first CTI RP to the call handler called "IT outage notifications", made in the previous step:

Fig. 2

Make sure you set it to "go directly to greeting" (see above). This makes a call that hits the handler go straight to the recorded message/greeting.

Also add the call forwarding condition:

Fig.3

(forwarding station=900617 which is the extension of the CTI RP).

At this stage you should be able to dial into 900617 and the Standard greeting should be played. So test this first before you proceed. If you get an announcement along the lines of "from a touch tone telephone dial any extension...blah blah", your call is not getting through to your call handler and you might be missing your forwarding rule.

4. Set the greetings in your call handler.

This is where you actually define what message will be played when someone calls into your queue and what happens to the caller once the message has been played. Go back to your call handler, called IT outage notifications (or whatever you have called it) and go to Greetings.
Let's use the scenario where you want to play a welcome message every time and then go to reception. This means that in your 900617 call handler, the standard greeting needs to always play. Below is a picture of what this welcome message/standard greeting needs to look like:

Fig. 4 

So you will need to record the standard greeting and personalize it. Don't allow caller input during the greeting, and after the greeting send the call to a second call handler (IT outage notification after greeting handler) and attempt transfer. I mentioned in the beginning that we need two call handlers to play a recording before transferring the call, and the reason for this is greeting "transfer rules".

There is some contradicting information on what is applied first: the playing of greetings or the application of transfer rules. To be honest, I have had both setups work, but I prefer to use a second call handler that does not play any greetings and only attempts the transfer to the reception or queue or wherever it needs to transfer to. Below is a screenshot of my transfer rules for the first call handler, which is actually a combination of direct transfers and sending the call to a second call handler for transfer.

Fig. 5


The standard greeting transfers to 33570 once it has been played (see Fig. 4); this could be your reception. In the example above (Fig. 5) the alternate greeting is sent to 900618, the second call handler.

5. Set up a second call handler with a transfer rule

Now set up a second call handler in the same way as the first call handler, the one that contains the welcome message/standard greeting.

This second call handler invokes the standard greeting when receiving a call:

Fig. 6




The standard greeting of the second call handler plays nothing (by setting Callers hear Nothing, see Fig. 6); it just invokes the standard greeting transfer rule and transfers the call to 33670, as per Fig. 7.


Fig. 7

If you want to record the greetings, it is probably easiest to use the greeting administrator, unless moving around wav files is your cuppa tea. The greeting administrator also allows you to easily turn an alternate greeting on and off on one of your call handlers, in case you have an extraordinary notification you want to play to your callers before connecting the call to the queue.

I have covered how to set up the greeting administrator in a separate post.

Namaste!


Sunday, 4 June 2017

Basic QoS verification



Most people would happily apply auto QoS on their Cisco kit and assume that somehow this will guarantee QoS is set up properly, like a true self-fulfilling prophecy. Of course, there is no such thing. I would argue that QoS is not an out-of-the-box feature that you just turn on. QoS requires tweaking and a thorough understanding of the quality and quantity of the network traffic in your organisation. When you set it up for the first time, you will most likely not get it 100% right. Maybe you will never get any complaints about the performance of your organisation's applications, simply because you have an obscene amount of bandwidth, or maybe there never is any contention on your WAN links, which would make you a pretty happy engineer. For all of you not working for a bank or an insurance company, your WAN links will be just enough to carry the business's or branch's traffic, so a properly functioning QoS configuration is important. All the voice engineers reading this article will tell you that degraded audio and video will be reported straight away by your users. This sort of degradation is almost always symptomatic, i.e. if your QoS is not set up properly, your video and audio are the first to suffer.


Cisco uses its MQC (Modular QoS Command-Line Interface) to implement QoS on its devices. Yes, it's modular, but it is still no mean feat to configure and verify. In this post I will try to break down the basics of MQC in an attempt to give some structure to the way you can verify its workings.

MQC is essentially a way to achieve the following:

When traffic enters a router, layer 3 switch or any device that needs to police traffic, it needs a way to break up the traffic and decide what priority to give it. Access lists can typically do that, in combination with DSCP values that have already been set (phones and telepresence endpoints use af41 for video and ef for audio only, so all you need to do is trust these values by putting mls qos trust dscp statements on your access ports). All this is done on the ingress port, and essentially you have now 'labelled' (through DSCP) all your interesting traffic. Everything that has not been explicitly labelled will be considered default traffic and will be the first type of traffic to be dropped if there is contention on your egress interface. So on the egress interface you will need to create class maps that match certain DSCP values and give that traffic a certain bandwidth (%). Then give it a shape average containing the maximum available bandwidth; this is called a service policy. Finally, this service policy is applied to your egress interfaces.

MQC can be roughly broken up into 4 distinct parts of configuration:

  1. Classify traffic using access lists.
  2. Mark traffic in accordance with access lists and/or DSCP trust markings.
  3. Prioritize and assign bandwidth to each class to create a service policy.
  4. Apply the service policy to the interface.
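Put together, the four parts above look something like the sketch below. This is a minimal illustration, not a copy of a working config: the ACL, class map and most policy map names are made up (loosely modelled on the "pm-classify-in" and "pm-shape-queue-out" names in the verification output further down), and the ports, percentages and shape rate are placeholders you will need to adjust.

```
! 1. Classify: define interesting traffic (here: a typical RTP port range)
ip access-list extended acl-voice
 permit udp any any range 16384 32767

! 2. Mark on ingress
class-map match-any cm-voice-in
 match access-group name acl-voice
policy-map pm-classify-in
 class cm-voice-in
  set dscp ef

! 3. Prioritize and assign bandwidth per class (child policy),
!    then shape to the available WAN bandwidth (parent policy)
class-map match-any cm-voice-out
 match dscp ef
policy-map pm-queue-out
 class cm-voice-out
  priority percent 33
 class class-default
  fair-queue
policy-map pm-shape-queue-out
 class class-default
  shape average 10000000
  service-policy pm-queue-out

! 4. Apply to the interfaces
interface GigabitEthernet0/1.2
 service-policy input pm-classify-in
interface GigabitEthernet0/0
 service-policy output pm-shape-queue-out
```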

Verification commands:

Are my policies applied to the relevant interfaces?
Issue the following command to find out which service policies are applied to which interfaces. Keep in mind that the direction of traffic decides whether the service policy is applied at all.

router#show policy-map interface brief
Service-policy output: pm-shape-queue-out
 GigabitEthernet0/0 
Service-policy input: pm-classify-in
 GigabitEthernet0/1.2 
 GigabitEthernet0/1.3 
 GigabitEthernet0/1.5 
 GigabitEthernet0/1.6 
 GigabitEthernet0/1.7 
 GigabitEthernet0/1.8 
 GigabitEthernet0/1.9 
 GigabitEthernet0/1.10 
 GigabitEthernet0/1.11 
 GigabitEthernet0/1.100 
router#


In the example above the service policy called "pm-classify-in" is applied to the ingress subinterfaces on Gi0/1; this is where traffic gets marked and classed. On the egress interface Gi0/0 the traffic gets policed and queued/dropped if necessary, using "pm-shape-queue-out".

Is traffic getting dropped?
Once you have established which policy map is applied to which interface, you can inspect its service policy. Look out for the allocated bandwidth for a particular class map and verify the offered rate, drop rate and queue drops; these are typically an indication that traffic is being policed and prioritized:

router#show policy-map interface Gi0/0
 GigabitEthernet0/0 

  Service-policy output: pm-shape-queue-out

    Class-map: class-default (match-any)  
      561242536 packets, 233371823734 bytes
      5 minute offered rate 463000 bps, drop rate 0000 bps
      Match: any 
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/106307/0
      (pkts output/bytes output) 561136228/233229485989
      shape (average) cir 10000000, bc 40000, be 40000
      target shape rate 10000000

      Service-policy : pm-queue-mark-out

        queue stats for all priority classes:
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 69567362/32277474836

        Class-map: cm-prec-4-5-out (match-any)  
          69567365 packets, 32277476457 bytes
          5 minute offered rate 228000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
            104148 packets, 11066856 bytes
            5 minute rate 0 bps
          Match:  dscp af41 (34)
            69463217 packets, 32266409601 bytes
            5 minute rate 228000 bps
          Priority: 33% (3300 kbps), burst bytes 82500, b/w exceed drops: 0   (notice the PRIORITY: 33% which indicates Low latency queueing applies)
          

        Class-map: cm-prec-3-out (match-any)  
          28990699 packets, 7045511506 bytes
          5 minute offered rate 10000 bps, drop rate 0000 bps
          Match: ip precedence 3 
            28990699 packets, 7045511506 bytes
            5 minute rate 10000 bps
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/830/0
          (pkts output/bytes output) 28989869/7045236932
          bandwidth 5% (500 kbps)

        Class-map: cm-prec-2-out (match-any)  
          168340364 packets, 88941217174 bytes
          5 minute offered rate 85000 bps, drop rate 0000 bps
          Match: ip precedence 2 
            168340364 packets, 88941217174 bytes
            5 minute rate 85000 bps
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/31192/0
          (pkts output/bytes output) 168309172/88898900167
          bandwidth 27% (2700 kbps)

        Class-map: cm-prec-1-out (match-any)  
          1546262 packets, 2100799191 bytes
          5 minute offered rate 5000 bps, drop rate 0000 bps
          Match: ip precedence 1 
            1546262 packets, 2100799191 bytes
            5 minute rate 5000 bps
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/286/0
          (pkts output/bytes output) 1545976/2100399647
          bandwidth 5% (500 kbps)

        Class-map: class-default (match-any)  
          292797848 packets, 103006819352 bytes
          5 minute offered rate 112000 bps, drop rate 0000 bps
          Match: any 
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops/flowdrops) 0/73999/0/73999
          (pkts output/bytes output) 292723849/102907474407
          Fair-queue: per-flow queue limit 16 packets

Am I marking my traffic correctly?

Well, this is a bit harder to answer and you might need to get Wireshark out of your toolbox for this, to verify whether certain interesting traffic has the correct DSCP value once it is received by the far end (use a SPAN port for this, for instance). You could start by looking at the ACLs that define your interesting traffic and see if their statements are getting hit. For example, consider the following access list, defining traffic for IP precedence 1:

ip access-list extended acl-prec-1
 remark Bulk Traffic
 permit ip any any precedence priority
 permit tcp any any eq 143

 permit tcp any any eq 993


router#      sh ip access-list acl-prec-1         
Extended IP access list acl-prec-1
    10 permit ip any any precedence priority (30831037 matches)
    20 permit tcp any any eq 143 (304 matches)

    30 permit tcp any any eq 993 (52018 matches)



As you can see, all the statements in ACL "acl-prec-1" have matches. If you are not seeing any matches on a certain statement, you might need to double-check things like IP addresses and ports and change the ACL until it is getting matched.

Another example is an ACL that matches all packets that have DSCP value ef (IP Precedence critical):

ip access-list extended acl-prec-5

 permit ip any any precedence critical

router# sh ip access-list acl-prec-5
Extended IP access list acl-prec-5

    10 permit ip any any precedence critical (1856824595 matches)



This ACL relies on DSCP values being set by the phones and/or telepresence endpoints (a telepresence endpoint would probably set DSCP to af41 for video), again using the mls qos trust dscp command on the access ports where these endpoints are connected.



Tuesday, 30 May 2017

Test your links using iperf3

Recently I was looking for some tooling that could test QoS policy maps. So what I wanted to do is max out a link that has QoS applied towards a WAN provider. 


iperf is essentially a tool that operates in a client/server fashion. So, if you want to test a certain link in terms of performance, you need iperf running on a machine on either side of the link: one as server, one as client.

Traffic is initiated from the client to the server, by default on destination port 5201 (UDP or TCP), so if you have a firewall in the path, make sure you open the default 5201 towards the iperf server.

Let us look at the full syntax and options first and then discuss a few examples and applications.

Simply run iperf3.exe <enter> from the command prompt; this will give you the output below:


 Server or Client:
  -p, --port      #                       port to listen on/connect to
  -f, --format    [kmgKMG]        format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval  #                      seconds between periodic bandwidth reports
  -F, --file name                      xmit/recv the specified file
  -B, --bind      <host>            bind to a specific interface
  -V, --verbose                         more detailed output
  -J, --json                               output in JSON format
  --logfile f                              send output to a log file
  -d, --debug                            emit debugging output
  -v, --version                          show version information and quit
  -h, --help                               show this message and quit
Server specific:
  -s, --server                             run in server mode
  -D, --daemon                         run the server as a daemon
  -I, --pidfile file                      write PID file
  -1, --one-off                           handle one client connection then exit
Client specific:
  -c, --client    <host>              run in client mode, connecting to <host>
  -u, --udp                                use UDP rather than TCP
  -b, --bandwidth #[KMG][/#] target bandwidth in bits/sec (0 for unlimited)
                                                (default 1 Mbit/sec for UDP, unlimited for TCP)
                                                (optional slash and packet count for burst mode)
  -t, --time      #                         time in seconds to transmit for (default 10 secs)
  -n, --bytes     #[KMG]           number of bytes to transmit (instead of -t)
  -k, --blockcount #[KMG]      number of blocks (packets) to transmit (instead of -t or -n)
  -l, --len       #[KMG]              length of buffer to read or write
                                                (default 128 KB for TCP, 8 KB for UDP)
  --cport         <port>               bind to a specific client port (TCP and UDP, default: ephemeral port)
  -P, --parallel  #                      number of parallel client streams to run
  -R, --reverse                          run in reverse mode (server sends, client receives)
  -w, --window    #[KMG]      set window size / socket buffer size
  -M, --set-mss   #                   set TCP/SCTP maximum segment size (MTU - 40 bytes)
  -N, --no-delay                      set TCP/SCTP no delay, disabling Nagle's Algorithm
  -4, --version4                       only use IPv4
  -6, --version6                       only use IPv6
  -S, --tos N                            set the IP 'type of service'
  -Z, --zerocopy                       use a 'zero copy' method of sending data
  -O, --omit N                          omit the first n seconds
  -T, --title str                           prefix every output line with this string
  --get-server-output               get results from server
  --udp-counters-64bit             use 64-bit counters in UDP test packets

[KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga-

iperf3 homepage at: http://software.es.net/iperf/
Report bugs to:     https://github.com/esnet/iperf


So, because iperf uses a client/server relationship, you will need to copy the iperf3.exe file, to the destination machine (most likely on the far end of the link you are testing), and start it up as the iperf server:

iperf3.exe -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

This now means you can start generating traffic to the IP address of the iperf server. By default this traffic will be generated using TCP port 5201.

Now that you have the iperf server running, generate some traffic towards it:

iperf3.exe -c 10.1.1.1 -t 60

This will generate TCP traffic for 60 seconds to 10.1.1.1 on port 5201, where 10.1.1.1 is the IP address of the iperf server.

So how can you apply all this?


Maybe you want to swap a link, to see what its maximum sustainable bandwidth is, or just to see if your provider is delivering what it set out to. Or maybe you want to swamp your link and push additional high priority traffic across (such as video and voice) to see if your real-time media remain of good quality/are protected by QoS policies. In that case, run iperf with multiple parallel streams for, say, 300 seconds, or at least a sufficient time frame to test and check.

Run:

iperf3.exe -c 10.1.1.1 -P 4 -t 300


This will run 4 unlimited bandwidth TCP streams (-P 4) for a period of 300 seconds to the iperf3 server at 10.1.1.1.

QoS


With iperf3 it should be possible to generate traffic with a certain ToS (DSCP) value; this is done using the -S parameter, for instance:


iperf3.exe -c 10.1.1.1 -P 4 -t 300 -S 184   (where 184 is the ToS value for DSCP/PHB ef)

Having said that, I have not been able to get this to work on a Win10 machine, so if anyone has, drop me a line.
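If you are wondering where 184 comes from: -S takes the whole ToS byte, and the DSCP value occupies the top six bits of that byte, so the ToS value is simply the DSCP value shifted left by two bits. A quick sketch in plain Python (the helper name and the small PHB table are mine, just for illustration):

```python
def dscp_to_tos(dscp):
    """Convert a 6-bit DSCP value to the 8-bit ToS byte iperf3's -S expects."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be between 0 and 63")
    return dscp << 2  # DSCP sits in the upper 6 bits of the ToS byte

# A few common per-hop behaviours (illustrative, not exhaustive)
phb = {"ef": 46, "af41": 34, "cs3": 24, "best effort": 0}

for name, dscp in phb.items():
    print(f"{name}: DSCP {dscp} -> iperf3 -S {dscp_to_tos(dscp)}")
# ef comes out as 184, matching the example above
```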


What you could do to test QoS is to swamp your link and run a video and or audio call across the same link at the same time, meanwhile checking jitter and media quality.

Oh, and you might want to use Wireshark as an additional tool so you can verify iperf's workings. Try the following link to make DSCP settings more visible in Wireshark:


https://www.mikrotik-routeros.com/2014/04/enabling-dscp-tos-display-column-in-wireshark/


All in all I think Iperf is quite a useful tool, and it comes for free, so why not use it.

Tuesday, 9 May 2017

VCS traversal zone configuration

Thought I'd throw together a post on how to configure a VCS traversal zone and explain what it does.

So what is a VCS traversal zone?  


A VCS traversal zone is pretty much a connection between a VCS Control (VCS-c) and a VCS Expressway (VCS-e), where the VCS-c is located on the inside (trusted) part of your network and the VCS-e is situated in your DMZ. The VCS-e is pretty much the point of entry into your network for external users to connect or make calls to your company's endpoints, for business-to-business calls using URI dialing. The word "traversal" indicates that the connection between the two VCSs traverses a firewall.

A traversal zone is also a requirement if you want to deploy Jabber Mobile and Remote Access (MRA).

Required configuration

First of all, this post will discuss SIP only, because who in the right frame of mind would use H323, right!? 

First, start by opening the required ports on your Firewall to allow the required communication between the two VCS's. Cisco has an excellent document that describes this in detail:


In order for a VCS-c (traversal client) and a VCS-e (traversal server) to communicate by means of a traversal zone, the VCS-e listens on TCP port 7001 and pretty much sits and waits for the VCS-c to talk to it on that port. 9 out of 10 times this means zero configuration on the firewall, because the traffic is initiated on the inside by the VCS-c and the VCS-e just responds.

Certificates for TLS

It is important that you get this step right. A lot of issues with traversal zones not establishing and calls not working across them are caused by incorrect certs or certs not being uploaded completely or properly. So create the following certs:

VCS-c:  internal cert (Cert A), signed by your internal CA (cert B, CA self signed cert)

VCS-e: external cert (Cert C, signed by the likes of Rapid SSL, containing the FQDN of the VCS-e server), and all CA's and intermediate CA's (certs D).

  1. Upload certs B and D into your VCS-e cert store (Maintenance > Security Certificate > Trusted CA certificate). Cert C, the public CA signed cert for the VCS-e, gets uploaded into: Maintenance > Security > Server certificate > Upload server certificate.
  2. Upload certs D and B into your VCS-c cert store (Maintenance > Security Certificate > Trusted CA certificate). Cert A, the internal CA signed cert for the VCS-c, gets uploaded into: Maintenance > Security > Server certificate > Upload server certificate.
I can't stress this enough: take your time doing this, as getting it wrong can cost a lot of post-implementation troubleshooting time if it fails (believe me, I've been there). If you do run into issues, use the client certificate testing tool under Maintenance > Security Certificates > Client certificate testing tool.

So now back to the actual config of the traversal zone, the previous steps were just prep work.


Configure VCS-e
I prefer to configure the VCS-e (traversal server) side of the traversal zone first
Firstly, log on to your VCS-e and create a user ID and password that the VCS-c can authenticate its traversal zone against: go to Configuration > Authentication > Devices > local DB. Create a user ID and password and make sure you take note of it, because you will need the creds later on the VCS-c.

Now go to Configuration > Zone > Create new zone and choose Type= Traversal server

Fill out the username that you created in the previous step. Port is 7001, transport is TLS (which is the default).

The TLS verify subject name is the FQDN/Subject name in the X.509 certificate of your VCS-e (make sure you have all your certs uploaded).

Fig. 1 VCS-e traversal zone config



Configure VCS-c
So go to your VCS-c: Configuration> Zones > create new zone and choose Type=Traversal Client.

Fig. 2 VCS-c traversal zone config




Once you have configured both ends of the traversal zone, on the VCS-c your VCS-e should be shown as GREEN and connected to port 7001 (see Fig.2)


Of course the traversal zone is a no-go if nothing points to it, so you will need to point a search rule at it, to actually direct calls to use the traversal zone.


On your VCS Control you need a search rule that points all calls directed to external URIs (pretty much everything that is not within one of your company's domain names) across the traversal zone. It's very much like a default route towards the internet. As per below:




So, in reverse, on the VCS-e you need to create a search rule pointing across the traversal zone to, for instance, *@domainname.com.

Namaste!






 

Sunday, 9 April 2017

Cisco Unified Call Manager audio codec preference lists and some SIP

I would like to take the opportunity to give a brief overview on codec preference lists. How to apply them and when you might be using them. 

The reason I had to dig into the art of codec preference lists is that I was working on an issue where I wanted to force calls into Webex to establish using G711ulaw, instead of the G722 they were using. In order to achieve this, I had to add a separate codec preference list on the regions configuration web page in CUCM.

What are audio codec preference lists?

Codec preference lists are lists that define the preferred codecs to be used by a telephony endpoint, in descending preference. So the top codec in the preference list is the most preferred codec.

These can be configured, by going to System > Region > Codec Preference list.

The first time you navigate to the codec preference lists you will see two default lists already pre-populated:

-Factory default lossy
-Factory default low loss


Don't worry too much about these existing lists; just copy and rename them to make your own list. So I created one that forces G711 over G722, by bringing the G711 codecs to the top of the list and demoting G722 to the lower parts of the list.


Fig.1 customised codec preference list, preferring G711 over G722

Now before I continue, let me explain when these codec preference lists kick in.
Audio codec preference lists are only invoked for an inter-region call. So when a phone in region A calls a phone in region B, the codec preference list defined in the region relationship between these two regions will be used. To make this clear: when a call is attempted between two phones in the same region, the audio codec preference list will NOT be invoked and the codec preference inherent to the phones will be used. For instance, two SX20s in the same region will establish a call using the AAC-LD codec.

Typically, endpoints send their codec capabilities inside an SDP in descending order of preference, as per RFC 5939:

https://tools.ietf.org/html/rfc5939 
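To make that concrete, here is a rough sketch of what that ordering looks like inside an SDP and how you could pull it out. The SDP body below is a made-up example (not a capture from my lab), and the parsing code is plain Python written just for illustration:

```python
# Illustrative SDP fragment: payload types 9 (G722), 0 (PCMU/G711ulaw)
# and 8 (PCMA/G711alaw), listed in descending order of preference.
sdp = """v=0
m=audio 16444 RTP/AVP 9 0 8
a=rtpmap:9 G722/8000
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
"""

def codec_preference(sdp_text):
    """Return the codec names from an SDP audio m-line, most preferred first."""
    rtpmap = {}
    order = []
    for line in sdp_text.splitlines():
        if line.startswith("m=audio"):
            # payload types follow the transport field, in preference order
            order = line.split()[3:]
        elif line.startswith("a=rtpmap:"):
            pt, name = line[len("a=rtpmap:"):].split(" ", 1)
            rtpmap[pt] = name.split("/")[0]
    return [rtpmap.get(pt, pt) for pt in order]

print(codec_preference(sdp))  # prints ['G722', 'PCMU', 'PCMA']
```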


How to apply preference lists?

Back to our example. Before I put in the codec preference list the Early Offer INVITE contained the following codecs:


Figure 2, SDP preferring G722


Now I will force the codec list from Figure 1 to be used between regions Force_g711 and WA_Perth, by changing the region relationship as follows:




Figure 3,  assigning codec preference between regions


The G711-preferring codec list is now invoked between the two regions, as can be seen in Figure 3.


Now, let's test a call between the two regions and see what the SIP SDP contains.
The originating Early Offer of the calling phone will still contain an SDP with G.722 as the most preferred codec (as in Figure 2); nothing new so far. The second part of the signalling path, between CUCM and the called phone, uses Delayed Offer. The called phone responds with an SDP in the 200 OK, containing its codec preference:


Again G.722 on top and then G.711. So far this is all very much expected, as the phones advertise their own codec preference completely independently of the region relationships they are part of. At this stage CUCM knows the codec preference of both phones and will now make a decision based on its codec preference list. Because G711 is the preferred codec, CUCM now sends an ACK (remember, it is responding to a Delayed Offer) to the called phone, with the following SDP:



And with this the deal is done: CUCM answers with a single codec, G711. In its preference list, G711 ranks higher than G722, and CUCM thus signals G711 back to both phones, forcing them to use it. The answer that goes back to the calling phone (in the 200 OK, since it sent an Early Offer), again, contains only G711 in its SDP.

At this stage all phones are forced to set up an RTP stream using G711.








Wednesday, 21 December 2016

Cisco SIP CUBE Media Flow Through v. Media Flow Around

Whoever has ever set up a Cisco CUBE might know that these suckers can be set up as either "Media Flow Through" or "Media Flow Around". If you have never set up a CUBE, were simply not aware that this option even existed, or don't know what the difference is: sit tight and let me do the talking.

The terms Media Flow Around and Media Flow Through refer to, you guessed it, media: so video and voice. In protocol terms: RTP, carrying for example H264 for video.


With flow around, the CUBE sits only in the signalling path between the calling and called endpoints. The signalling is aimed at setting up an RTP stream directly between the two endpoints, and once that is established, the CUBE is no longer involved unless additional signalling is required.



Fig 1. - Cisco CUBE Media flow around

With Media Flow Through, you guessed it, the RTP stream is set up through the CUBE. This means that the RTP stream is broken up into two parts: from phone A to the CUBE, and from the CUBE to phone B. So all RTP packets flow through the CUBE.


How to configure this?

Easy; go into voice service voip on your CUBE:

voipgw(conf-voi-serv)#media ?
  flow-around     The media is to flow around the gateway
  flow-through    The media is to flow through the gateway

The default is media flow-through and, once configured, it will not be visible under voice service voip if you do a show run.
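For completeness, flipping a CUBE to flow-around is a one-liner under voice service voip (illustrative snippet):

```
voice service voip
 media flow-around
```

To revert to the default behaviour, configure media flow-through again.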
  
How would I decide whats best?

Read my lips: there are no silver bullets. So the answer is: it depends. If you are not worried about what the CUBE will use then, fine, let the default do its work, which is flow through. Flow through allows you to hide the IP addresses of your phones/endpoints, because only your CUBE will be announced in the SIP signalling towards your SIP provider. Of course this means that, in terms of routing, your provider only needs connectivity to your CUBE's IP address. With flow around this is a whole different matter, because your SIP provider will need to be able to route to each and every phone/endpoint's IP address. This requirement might not suit everyone, or might simply not be possible for a number of reasons.

The second item you need to be aware of when choosing between flow through and flow around is bandwidth. If you, for instance, deployed a centralised CUBE in one of your data centres and you use flow through, ALL the RTP streams will come in through the WAN link to the CUBE and will go out again to be terminated on the actual phone/endpoint. With flow around this is not the case, because when an external caller calls a phone in branch A, the SIP signalling will be dealt with by the CUBE, but the negotiated RTP stream that is part of that call will run between your SIP provider and the endpoint in branch A. So the RTP stream will NOT flow through the WAN link of the data centre where the CUBE is. You should also keep this in mind when designing your QoS policies. Always keep in the back of your head how RTP actually traverses your WAN in order for it to connect to the PSTN.


One other reason to use flow-through is that you might want to use the CUBE's ability to transcode codecs, through its Local Transcoding Interface (LTI).

http://www.cisco.com/c/en/us/support/docs/voice-unified-communications/unified-border-element/115018-configure-cube-lti.html
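A hedged sketch of what LTI transcoding configuration looks like; the voice-card slot, codecs and session count here are illustrative and will depend on your hardware and DSP resources:

```
! Enable DSP farm services on the voice card hosting the DSPs
voice-card 0
 dsp services dspfarm
!
! Define a transcoding profile and tie it to the CUBE application (LTI)
dspfarm profile 1 transcode
 codec g711ulaw
 codec g729r8
 maximum sessions 4
 associate application CUBE
 no shutdown
```

The "associate application CUBE" line is what makes this LTI rather than SCCP-based transcoding; see the Cisco link above for the full walkthrough.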

One last point worth mentioning is the added complexity of flow-around in terms of SIP signalling. With flow-around, SIP is more complex: flow-around causes re-INVITEs, for instance, to signal the phone's IP address, whereas flow-through does not require this.


How to verify how my CUBE deals with media flow?

There are a few things you can do to check whether media flows through your CUBE or around it. You can debug ccsip messages. This will show you the contents of the SIP SDP, which contains the IP address where the RTP stream should be terminated. This could be the IP address of the CUBE itself (in which case you will most likely be using flow-through) or the IP address of a phone/endpoint (in which case your CUBE will most likely be using flow-around).  So basically what you need to look at is the SDP that the CUBE sends TO your SIP provider: if it contains the IP address of your CUBE, it's flow-through; if it is the IP address of an endpoint, flow-around will be attempted.

SDP example below; check the IP address in the RTP connection line, c=
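For illustration, a minimal SDP body might look like this (all addresses and ports are made up). The c= line carries the media address, so here the RTP stream would terminate on 10.1.1.10 - if that is your CUBE, you are looking at flow-through:

```
v=0
o=CiscoSystemsSIP-GW-UserAgent 1234 5678 IN IP4 10.1.1.10
s=SIP Call
c=IN IP4 10.1.1.10
t=0 0
m=audio 16384 RTP/AVP 0
a=rtpmap:0 PCMU/8000
```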



There is an easier way to verify: make a phone call across the CUBE in question, keep the call up, go to CUCM, find the phone that has the active call and browse to that phone's IP address. Now go to stream 1, for example:




These streaming statistics will tell you straight away between which two IP addresses the RTP stream is set up; if the CUBE's IP address is not in there, then you are definitely using flow-around.

Also you could issue the following command on your CUBE:

show voip rtp connections

This will show you the IP addresses of the call leg(s).


This pretty much wraps it up, keep your comments coming.

Monday, 17 October 2016

Troubleshooting network congestion problems; hints and tips.

Anyone in an IT network operations role will, at some stage of their career, have been involved in looking into an issue related to network latency. Most IT engineers also know that issues related to high response times, overutilisation and network bottlenecks can be notoriously hard to crack. This post is by no means a silver bullet; I am merely attempting to hand out some tools, tips and methodologies with which you can equip yourself. Other engineers have other tools that might work just as well; it's up to you to combine these and come up with what works best for you. I am just describing how I do things. If you have any comments, please leave them at the end of this post and I will always consider your input. So let's get cracking.


The first thing you need to do when someone reports a network performance related issue is to ask questions. End users will never give you technically relevant information; on the contrary, they will provide you with a symptom, like "my internet is slow" or "pulling down files from the payroll server share takes ages". A good engineer does not get annoyed by these types of problem descriptions. It is up to the engineer to distill some more relevant information out of them and use the power of elimination, so ask questions like:
  • When did this start happening (time)?
  • Does everyone in your office suffer this issue, if so who else?
  • Is it just the internet, or are there other applications you are trying to access that show bad response times? If so, which ones?
Anything that could possibly exclude causes and narrow down the possibilities should be asked. There are simply no rules when it comes to eliminating possible causes. This is probably the single most important part of your diagnostics, because not asking the right questions could lead to spending large amounts of time analysing useless information that will never get you any closer to identifying a cause.

I am assuming that you have some sort of network monitoring tool available, for instance SolarWinds NPM or some sort of freeware thingy you pulled down; it doesn't really matter. What you really need to be looking at, on whatever tool you are using, is some of the following symptoms. Again, the aim is to eliminate links, devices and ports.

You could look at the following items:
  • response times over time
  • traffic patterns
  • switch throughput
  • port throughput

Port throughput in particular can be very useful information for locating bottlenecks. I typically set up throughput reporting on trunk links and on links to WAN routers and ESXi hosts, as this is typically where various traffic types aggregate (there is very little point in continuously monitoring access ports). Once you have identified the source of the bottleneck, for instance if you have a 10Mbps WAN link and you see a monitored trunk port pumping out 10Mbps of traffic at a certain time, you would want to drill into that trunk port (let's call it trunk A) and quantify its traffic.

Knowing your OSI model can be very beneficial when diagnosing these types of issues, so let's continue with our example. After you have successfully identified which Layer 1 link is the bottleneck (trunk A), you would then like to know who or what is generating all that traffic. It could be a user, not aware of Google, torrenting all editions of Encyclopedia Britannica in PDF, or some big-ass database trying to replicate with an off-site peer; it could be anything really.

The next step would be to use a tool that can do more in depth analysis of Layer 3 (IP) and 4 (TCP/UDP) traffic and that can identify top talkers. Let me mention a few methods and tools:

  • IP accounting: can be configured on Cisco devices and applied to interfaces. Very rudimentary, and with the drawback that it can only be configured on a Layer 3 interface, so it might not be useful on a pure L2 trunk port, for instance.
  • NetFlow: can be configured at a global level and does not need to be tied to an interface; it can work in conjunction with a management tool by loading its data into an external database. NetFlow will instantly provide you with a list of top talkers and TCP/IP streams, but really needs an external database for intelligent data collection.
  • Wireshark: any network engineer should have it in their toolbox. If you don't have it, go download it! There is no cost and it's the best in the business.

IP accounting is perhaps the easiest one to quickly spin up, but it has its limitations. NetFlow is pretty good too, but not as lightweight as Wireshark.
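As a rough sketch, enabling IP accounting and classic NetFlow looks something like the following; the interface names and the collector address are made up for illustration:

```
! IP accounting on a routed (Layer 3) interface
interface GigabitEthernet0/1
 ip accounting output-packets
! View the results with: show ip accounting
!
! Classic NetFlow: capture on the interface, export to a collector
interface GigabitEthernet0/1
 ip flow ingress
!
ip flow-export destination 192.0.2.10 9996
ip flow-export version 9
! A quick on-box view of top flows: show ip cache flow
```

Even without a collector, show ip cache flow gives you a usable on-box top-talkers view in a pinch.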

So let us consider using Wireshark for network congestion analysis. Now you have pinpointed the port that carries most of the traffic and is maxing out your WAN link. The next step would be to find out what sort of traffic it is and, even better, tie it to a process or app running somewhere locally (a file share, an illegal torrent server, could be anything).
The first thing you would need to do is drag yourself over to the troubled location or switch and configure a SPAN session on that switch, mirroring the port in question (trunk A in our example) to a port to which you connect the laptop running your Wireshark client. Capture all traffic for a relatively short amount of time, while the congestion is occurring.
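A local SPAN session for this can be sketched as follows; the interface names are made up, with Gi0/24 being where the Wireshark laptop plugs in:

```
! Mirror all traffic (both directions) on trunk A to the capture port
monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/24
! Verify with: show monitor session 1
```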

Once you stop your capture, after a minute or two, go to Statistics > Endpoints in Wireshark:

Fig. 1 - Endpoint list summary in Wireshark

The next screen displays an example of the top endpoints generating most of the traffic, including IP addresses.


Fig.2 - Top talkers summary
You could, if you wanted, go straight to TCP in Figure 1, but I leave that up to you. Now that you know the main traffic generator's IP address, it is possible that you already have an idea of which application or process is causing the problem. Let us, for the sake of this post, assume that you don't, which means you will need to go on to the next step.
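If you prefer the command line, the same endpoint statistics can be pulled from a saved capture with tshark, Wireshark's CLI companion (the capture file name here is just an example):

```
# Summarise IP endpoints in a capture
tshark -r congestion.pcap -q -z endpoints,ip

# Or summarise TCP conversations instead
tshark -r congestion.pcap -q -z conv,tcp
```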

For the next step, I recommend using TCPView, by the boys from Sysinternals (well, these boys were bought out by MS ages ago, so they will be on some Caribbean beach by now, but anyway).  TCPView is free of charge and is simply a great tool.

Below is an example of what TCPView looks like


Essentially, TCPView is a graphical equivalent of running netstat on the command line and tying TCP/IP sockets to process IDs.  The good thing about TCPView is that it will let you sort sockets by bytes in and out, allowing you to identify your most bandwidth-hungry processes.
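If you only have built-in tools to hand on the suspect machine, Windows netstat can at least map sockets to their owning processes (run it from an elevated prompt, as -b needs admin rights):

```
:: -b shows the executable that owns each connection, -o shows its PID
netstat -b -o

:: -e shows interface byte counters, useful for a quick before/after check
netstat -e
```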

Now that you have done that, you could temporarily turn off the culprit process and see whether the traffic on your bottleneck decreases, just for testing.

As I said at the start, this post just describes a bunch of thoughts and a methodology that I find useful and think are definitely worth sharing. I'd love to hear your feedback and suggestions.

Good luck

