nanog mailing list archives

Re: questions asked during network engineer interview


From: Peter Kristolaitis <alter3d () alter3d ca>
Date: Fri, 24 Jul 2020 03:59:42 -0400

On 2020-07-24 3:06 a.m., Mark Tinka wrote:

On 24/Jul/20 00:26, William Herrin wrote:

Many moons ago, I interviewed at Google. During one of the afternoon
sessions the interviewer and I spent about half an hour spitballing
approaches for a system monitoring problem at scale. I no longer
remember the details. With a little over 15 minutes remaining he
handed me a marker and said, "Okay, now write code for that on the
whiteboard." For an abstract problem without foundation that I had
never considered prior to that discussion. I said, "I really don't
think I can do a credible job of that in the time we have." He says,
"Well it's okay to use pseudocode. Don't you want to try?" I think
you're missing the point dude. It's still an abstract problem and
after half an hour's discussion I might be ready to draw boxes and
arrows. I'm certainly not ready to reduce it to code.

I said, "No," and needless to say I didn't get an offer. And I'm okay
with that. I really didn't fancy making a career of competing to be
the first to write poorly considered software.

The booby prize for failing the interview was a Google coffee mug. I
still have it in storage somewhere.
Where the industrial revolution praised expertise, the digital
revolution rewards curiosity.

I prefer to have staff who are burdened with being curious, rather than
staff who think they don't need to be. After all, all the information is
already out there. Having experience is just as important as being
diligent about going out and obtaining it.

Mark.

I would suggest that companies that follow FAANG-type development models actually value both expertise and curiosity, and also throw in the ability and willingness to iterate rapidly.  Certainly one can search Google for solutions to nearly any problem, but it takes expertise to take the bits you find and structure them in a way that makes sense for your particular problem -- both to solve the immediate problem and to make future feature additions and bug fixes easier.

I suspect the question posed to Mr. Herrin was intended to probe not just the expertise factor, but the iteration factor as well.  Firstly, can you, with only partial (or minimum-viable-product) requirements, structure your code so that it covers the currently known requirements and reasonable design assumptions given the nature of the system?  (How does the control loop work?  Are we collecting data by polling or pushing?  What layer is responsible for aggregation?  How do we define a new monitoring check?  What are the interface points with external systems?  How are alert thresholds set?)  Secondly, after you've finished that exercise, if we throw a (reasonable) new requirement at you, is your code well-structured enough that the change doesn't necessitate a complete rewrite?
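
To make that concrete (and purely as an illustration -- this is my own hypothetical sketch in Python, not the actual Google problem or anyone's production design), the kind of skeleton an interviewer is usually probing for looks something like this: checks are pluggable, the control loop is explicit and poll-based, and thresholds live in data rather than in code.  All of the names here (Check, CpuCheck, run_control_loop, and so on) are made up for the example.

# Hypothetical sketch only -- illustrative assumptions, not anyone's
# production monitoring system.
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Sample:
    """One measurement from one check against one target."""
    target: str
    metric: str
    value: float


class Check(ABC):
    """A monitoring check.  New checks are added by subclassing,
    not by editing the control loop."""

    @abstractmethod
    def collect(self, target: str) -> Sample:
        ...


class CpuCheck(Check):
    def collect(self, target: str) -> Sample:
        # Placeholder: a real implementation would poll the target
        # (SNMP, an agent, an API, ...).  Dummy value for the sketch.
        return Sample(target=target, metric="cpu_util", value=0.42)


def over_threshold(sample: Sample, thresholds: dict) -> bool:
    """Alert thresholds come from config data, not hard-coded logic."""
    limit = thresholds.get(sample.metric)
    return limit is not None and sample.value > limit


def run_control_loop(targets, checks, thresholds, alert, interval=60):
    """Poll-based control loop: collect, evaluate, alert, sleep."""
    while True:
        for target in targets:
            for check in checks:
                sample = check.collect(target)
                if over_threshold(sample, thresholds):
                    alert(sample)
        time.sleep(interval)

Whether you poll or accept pushed data, and where aggregation happens, can then be argued about per-layer without throwing the whole thing away -- which is exactly the property the second half of the question is testing.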

I've never been to an interview where I received a 400-page design document blessed by all 18 major stakeholders before being asked to write code.  It's almost always either small, well-defined problems (which are often related to your understanding of algorithmic complexity) or an iterative design process as above.  In the latter case, the point isn't to write perfect, flawless code for version 1; it's to see how you write version 0.1alpha and then how you think about getting to version 5.
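
Building on the same hypothetical sketch above, the "version 0.1alpha to version 5" test is really whether a new requirement lands as an additive change.  For example (again, entirely made up), "also watch disk usage" should mean a new check class and a threshold entry, not a rewrite of the control loop:

# Continuing the hypothetical sketch: a new requirement arrives.
# If the structure held up, it's an additive change.
class DiskCheck(Check):
    def collect(self, target: str) -> Sample:
        # Placeholder value; a real check would query the target.
        return Sample(target=target, metric="disk_used_pct", value=0.91)


thresholds = {"cpu_util": 0.85, "disk_used_pct": 0.90}
checks = [CpuCheck(), DiskCheck()]
# run_control_loop(["router1", "router2"], checks, thresholds,
#                  alert=print)  # would loop forever; shown for shape only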

And, realistically, we're talking about an interview here.  There are time constraints, and no one (interviewer or interviewee) should expect a production-grade system as the output of some whiteboarding exercises.

