Recently I was investigating alerts being generated for inbound interface discards on multiple interfaces across multiple Vyatta 5400 devices. There were no noticeable performance issues on traffic passing through the devices. The discards were reported in SNMP, show interface ethernet ethX, and ifconfig outputs. An example show interface ethernet ethX output I was reviewing is below.

vyatta@FW01:~$ sh int ethernet eth0
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:50:56:0x:0x:0x brd ff:ff:ff:ff:ff:ff
    inet 172.
I find the interface discard counter deceptively complex. When you ask people what the counter means, the usual answer is that you are overrunning the throughput capability of an interface, which matches pretty closely the definition in the IF-MIB: “The number of inbound packets which were chosen to be discarded even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.”
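Since ifInDiscards is a raw, ever-increasing counter, alerting on it usually means polling it twice and computing a rate. Below is a minimal sketch of that arithmetic, assuming a 32-bit Counter32 that wraps modulo 2^32; the sample values and poll interval are made up for illustration.

```python
COUNTER32_MAX = 2**32  # IF-MIB Counter32 wraps modulo 2^32


def counter_delta(prev: int, curr: int, modulus: int = COUNTER32_MAX) -> int:
    """Difference between two counter samples, tolerating one wrap."""
    return (curr - prev) % modulus


def discard_rate(prev: int, curr: int, interval_s: float) -> float:
    """Discards per second between two SNMP polls of ifInDiscards."""
    return counter_delta(prev, curr) / interval_s


if __name__ == "__main__":
    # Normal case: counter moved from 100 to 160 over a 60-second poll cycle.
    print(discard_rate(100, 160, 60.0))   # 1.0 discard/sec
    # Wrap case: counter rolled over 2^32 between polls.
    print(counter_delta(2**32 - 10, 5))   # 15
```

The modulo arithmetic is what keeps a counter rollover from showing up as a huge negative (or absurdly large) discard spike in the monitoring system.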
Like a lot of people, I regularly have the problem at the end of a workday, or even a workweek, of answering the question “What did I do?”, let alone “What did I accomplish?” To find an answer to these questions I have started keeping a daily journal using both an automated report and a manual entry. Between the two, I tend to have a good idea of what occurred during my workday and workweek.
I have been thinking about an old issue that a customer encountered with a pair of Nexus 7000 switches about a year and a half ago. When the issue first came onto my radar it was in a bad place: the customer had Nexus 2000 Fabric Extenders that would go offline, and eventually the Nexus 7000 would go offline, causing some single-homed devices to become unreachable and, in the process, broader reachability issues.
Lately I’ve been very interested in the academic side of computers. Complex Systems, Theoretical Computing, and Control Theory are my focuses right now. This has come about because I’m getting more interested in how systems work and how to measure them than in how to implement them. My career has been very focused on implementation rather than on how systems work and can be measured. I’ve never had any sort of formal Computer Science education, making a lot of this new territory for me.
Over the last few days I’ve started to play with the OpenDaylight Test VM Image. This image was easy to get up and running, giving me a playground with mininet and a pre-baked OpenDaylight (ODL) controller. After deploying the OVA file in VirtualBox and poking around the file system, I got down to “business” getting a test topology in place. I made some changes to the initial mininet startup configuration file, making the topology more complex and changing the startup command to look like the following,
To start off I’ll cut past some of the marketing and state that PURE Systems are IBM BladeCenters with predefined hardware configurations that support both x86 and POWER workloads. That said, the advantage of the PURE architecture is the software IBM has assembled to orchestrate deployments of workloads across all of the integrated platforms. The orchestrator is named Flex System Manager (FSM). The FSM plugs into VMware for x86, the HMC for POWER systems, and other management systems for other virtualization platforms.
Riverbed Steelhead devices have a method they use to find each other on the network, which Riverbed has named Enhanced Auto Discovery. It is intended to reduce time to deployment and simplify configuration on the devices. The core of this method is setting options in the TCP headers of the initial 3-way handshake. There are a few concepts to go over to fully understand the Steelhead Auto Discovery process.
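To make the TCP-header mechanics concrete, here is a minimal sketch of how a TCP option is laid out in a SYN: one kind byte, one length byte covering kind + length + data, then the data itself. The Riverbed auto-discovery probe is commonly reported to use option kind 76 (0x4C); treat that kind value and the payload below as illustrative assumptions, not a wire-accurate probe.

```python
import struct

RVBD_PROBE_KIND = 76  # assumption: commonly cited Riverbed probe option kind


def encode_tcp_option(kind: int, data: bytes) -> bytes:
    """Encode a single TCP option as kind | length | data."""
    return struct.pack("!BB", kind, 2 + len(data)) + data


def decode_tcp_options(blob: bytes) -> list[tuple[int, bytes]]:
    """Walk a TCP options blob and return (kind, data) pairs."""
    opts, i = [], 0
    while i < len(blob):
        kind = blob[i]
        if kind == 0:        # End of Option List: stop
            break
        if kind == 1:        # No-Operation: single padding byte
            i += 1
            continue
        length = blob[i + 1]
        opts.append((kind, blob[i + 2:i + length]))
        i += length
    return opts


if __name__ == "__main__":
    # Illustrative 4-byte payload, preceded by a NOP pad byte.
    probe = b"\x01" + encode_tcp_option(RVBD_PROBE_KIND, b"\x01\x02\x03\x04")
    print(decode_tcp_options(probe))  # [(76, b'\x01\x02\x03\x04')]
```

This is also roughly what a packet capture shows: a client-side Steelhead tags the outbound SYN with its option, and a downstream Steelhead that parses the options recognizes the probe and responds, which is how the peers discover each other without manual peering configuration.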
I have some catching up to do, so here are some photos from April! 8static 34, April 13th, 2013, 7:00pm. music: Br1ght Pr1mate (BOS), Note! (NYC), Dauragon (DC), Environmental Sound Collapse (CHI). visuals: Environmental Sound Collapse (CHI). workshop: Animal Style on soldering for modding / circuit bending. Dauragon Br1ght Pr1mates Note!
This past month I attended Cisco Live in Orlando, FL with 20,000(?) of my fellow Network/Collaboration/Service Provider/Data Center engineers from all around the world. This was my first time attending, and I had a blast! There are a few themes that were big topics at Cisco Live that I won’t talk much about in this post, one of which is the Internet of Everything (IoE), as it is already well covered and, well, is really just Market-ecture-tastic.
This show is from last year; I’m just finally getting the images edited and posted. Cheap Dinosaurs play Goblin, Friday, October 12th, 2012, 7:00pm at PhilaMOCA – All Ages! http://www.philamoca.org/ Cheap Dinosaurs (PHL) http://www.cheapdinosaurs.bandcamp.com/ Tom Guycot (PHL) http://www.soundcloud.com/tomguycot The Joint Chiefs of Math (PHL) http://www.thejointchiefsofmath.bandcamp.com/ NO CARRIER (NYC) http://www.no-carrier.com/ Full set of images: http://www.flickr.com/photos/su1droot/sets/72157633179177793 The Joint Chiefs of Math Tom Guycot Cheap Dinosaurs
8static 33, March 9th, 2013 at PhilaMOCA. Full set on Flickr: http://www.flickr.com/photos/su1droot/sets/72157633015537907/ music: Nullsleep (NYC), Doomcloud (CT), Noisewaves (MI), Mechlo (POR). visuals: Batsly Adams (NYC). after-party: Radlib (CT) is back, DJing MOD files. Mechlo Nullsleep Doomcloud
I think it makes sense to kick off this blog with a look at how I got to where I am today. This post covers about 16 or so years (as of posting) of working in IT and computers, and, well, many years before that as a … hobbyist. My first introduction to communications, I guess you could say networking, was calling BBSs and eventually installing and playing with BBS software. I learned a lot about modems and dialup communications.