Scripting on 7210?

Greetings-
I have a number of SAS-M/Mxp nodes running 20.3 firmware, and I'm having MTU issues when trying to bring them into NSP/NFM-P for management. It turns out that the system interface MTU is ignored and the device assumes the MTU of the upstream port. Since this is in-band management on the same upstream port that carries the customer service(s), I can't set the MTU down to 1500 on that port without affecting the service(s). So I find myself needing to move the link IPv4 address from a router "Base" interface to an IES service interface, where I can set the IP MTU and treat the in-band management traffic as a service.
I've tested this many times in the lab and everything works as expected. I'm ready to start rolling this out, but I want to minimize the possibility that a script failure will cause loss of management to the device. My script basically shuts down the router interface, then creates the IES and an IES interface with the same IP as the original interface, using the upstream port/VLAN as a SAP.
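In rough terms, the per-node change would look something like the following - the service ID, customer ID, interface name, address, IP MTU and port:VLAN values below are only illustrative placeholders:

# shut the existing network-side interface, then recreate the addressing under an IES
/configure router interface "mgmt" shutdown
/configure service ies 100 customer 1 create
/configure service ies 100 interface "mgmt-ies" create
/configure service ies 100 interface "mgmt-ies" address 192.0.2.10/24
/configure service ies 100 interface "mgmt-ies" ip-mtu 1500
/configure service ies 100 interface "mgmt-ies" sap 1/1/1:100 create
/configure service ies 100 no shutdown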
What I'm trying to figure out is the following (the specific show/ping commands I have in mind are listed after the steps):
1. Can I parse the output of commands like '/show router Base interface' to determine whether an interface named 'mgmt' exists?
2. If it exists, can I parse the output of '/show router Base interface "mgmt" detail' to obtain the IPv4 address and mask? $ipv4Address/$ipv4Mask
3. Can I also parse that output to get the port and vlan used for the uplink? $port:$vlan
4. Can I parse the output of a command like '/show router Base route-table' to find the default gateway address? $defaultGateway
## at this point, insert script/code to move the $ipv4Address/$ipv4Mask and $port:$vlan to the new IES.
5. Can I parse the output of a command like 'ping $defaultGateway' to determine if the gateway is reachable?
6. Can I create a conditional based on the result of that ping which either exits the script (script ran OK, no issues) or continues with script/code to roll the config back to the pre-change configuration?
7. Test the gateway ping again, then exit with a message indicating either that the script failed but rolled back successfully (ping works), or that the script failed and the rollback also failed (ping still fails)?
8. End script.
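For reference, the show/ping commands behind steps 1-5 would be along these lines (the gateway address is just a placeholder for whatever the route-table lookup returns):

/show router Base interface
/show router Base interface "mgmt" detail
/show router Base route-table 0.0.0.0/0
ping 192.0.2.1 count 5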
TiMOS/SR OS 20.3 on the 7210 SAS-M/Mxp doesn't appear to have the Python functions. I opened a ticket with support, who gave me a little bit of info and then pointed me at the System Management Guide for 24.9. The only thing I saw in there regarding scripting is the material on setting up scripts with script-control and script-policy configuration.
I have otherwise been unable to find any documentation or examples that might help me figure out the answers to my questions.
The aim here is to get all of the affected devices into NSP/NFM-P so that I can then upgrade them to 24.9.
Thanks-
J
Re: Scripting on 7210?
Hi,
I have done something similar before - i.e. flipping the connectivity from a network interface to an IES and back - using an exec config. I also have a SAS-Mxp here in the lab that I can test with, but yes, scripting is very limited on them.
I take it that, for all your parsing questions, you are referring to doing this from a central location that has access to the SAS, or are you hoping to run these locally? Obviously, the ping to the default gateway would only work from a central location if the migration had been successful in the first instance and you could get back to the SAS to make the check.
I think the safest way would be to use a combination of an exec config and the cron function.
So create an exec file on cf1: with the main change:
file vi test-exec.cfg

add your specific change:
/admin rollback save comment "pre-change config"
/configure router interface xxx shutdown
/configure service ies etc etc

And another exec file for rollback:
file vi test-rollback.cfg

add the following:
/admin rollback revert latest-rb now

Then use script-control:
*A:PNL-SASMX-001>config>system>script-control# info
----------------------------------------------
script "test-exec"
location "cf1:/test-exec.cfg"
no shutdown
exit
script "test-rollback"
location "cf1:test-rollback.cfg/"
no shutdown
exit
script-policy "exec1"
results "cf1:/results"
script "test-exec"
no shutdown
exit
script-policy "rollback1"
script "test-rollback"
no shutdown
exit

Then create cron jobs to make the change, with the rollback maybe 5 minutes later. Then, if you lose the node, the rollback should revert your change; otherwise, if you maintain management, you just shut down the rollback cron job and then delete it, etc.
*A:PNL-SASMX-001>config>system>cron# info
----------------------------------------------
schedule "exec" owner "paramount"
description "test-exec"
count 1
script-policy "exec1"
type calendar
day-of-month all
hour 11
minute 35
month february
weekday monday
end-time 2026/02/10 10:00
no shutdown
exit
schedule "rollback" owner "paramount"
description "test-rollback"
count 1
script-policy "rollback1"
type calendar
day-of-month all
hour 11
minute 40
month february
weekday monday
end-time 2026/02/10 10:00
no shutdown
exit

This may be a safe way to carry out your changes, subject to testing.

Just out of interest, what is your actual available IP MTU, and what kind of MTUs are on your services? I assume you are running the port in hybrid mode. Does fragmentation work OK if you ping with large packets? Most issues I have had with NFM-P connectivity and MTU have been caused by some underlying Layer 2 device in the path with a lower MTU - that then becomes obvious when pinging at large packet sizes.
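For example, something like this towards the node from the NFM-P side (or from the node towards NFM-P), stepping the size up until it breaks - the address and sizes here are only placeholders:

ping 192.0.2.10 size 1400 do-not-fragment count 5
ping 192.0.2.10 size 1472 do-not-fragment count 5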
What is your SNMP max packet size set to? Have you tried setting it lower, just so that your packets get there?
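On the node side that would be something like the following - the value is only an example:

/configure system snmp packet-size 1400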
Thanks
Paramount
