Nmap Development mailing list archives

Call for feedback: Nmap regression testing and the future of nmap-nsock-scan


From: Jacek Wielemborek <d33tah () gmail com>
Date: Tue, 22 Jul 2014 22:44:26 +0200

List,

Two weeks ago I had a conversation with my Google Summer of Code mentor,
David Fifield. During the meeting we agreed that my project (nsock-based
-sT port scanning) is - for numerous reasons - impossible to complete by
the end of GSoC. In order to prepare the code for merging, I would need
to perform a few tasks that are very time-consuming, such as:

* Feature implementation - as of today, nsock_scan.cc is 988 lines of
C++ code and supports quite a lot of Nmap's features. However, even
though I spent many weeks trying to figure it out, its congestion
control behavior is still not consistent with the original
implementation, sometimes resulting in slower scans. Support is also
missing for --min-rate and --max-rate (a rough sketch of what that
involves follows this list), probe canaries (the probe Nmap sends
every 1.5 seconds if we get no replies), rate limiting detection and
probably some other features. Implementing these alone would most
likely take more time than is left in the GSoC timeframe.

* Testing - replacing the current -sT code with nmap-nsock-scan's
would mean making sure that all of the functionality that was there
before is kept and works as well as it used to. As a consequence, we
would need more and better automated regression tests, which take a
lot of time to prepare (and to verify, so that we know they are
actually testing the right things). It would also be necessary to
issue a call for testing and wait for some time (probably about two
weeks) for users to find the inevitable bugs.

* Passing code review, debugging, documenting and merging - both code
review and fixing the bugs found during the testing above could pose
challenges that require reengineering parts of my code, which would
take a significant amount of additional time. In theory, documenting
and merging the code should be rather easy in my case, but I would
prefer to budget at least a week for these.
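
To give an idea of what the --min-rate/--max-rate item involves, here
is a rough Python sketch of the kind of send-rate bookkeeping the
nsock-based engine would need. This is not Nmap's actual code (the
real logic is C++ and has to cooperate with congestion control); the
class and method names are made up:

    import time

    class RateMeter:
        """Toy stand-in for Nmap's send-rate bookkeeping."""

        def __init__(self, min_rate=None, max_rate=None):
            self.min_rate = min_rate  # probes/s we must not fall below
            self.max_rate = max_rate  # probes/s we must not exceed
            self.sent = 0
            self.start = time.time()

        def current_rate(self):
            elapsed = max(time.time() - self.start, 1e-6)
            return self.sent / elapsed

        def may_send(self):
            """True if one more probe keeps us under --max-rate."""
            return (self.max_rate is None
                    or self.current_rate() < self.max_rate)

        def must_send(self):
            """True if we have to send now to keep up with --min-rate,
            even when congestion control would rather wait."""
            return (self.min_rate is not None
                    and self.current_rate() < self.min_rate)

        def record_send(self):
            self.sent += 1

The arithmetic is trivial; the time-consuming part is wiring such
checks into the nsock event loop so that they cooperate with cwnd and
ssthresh instead of fighting them.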

As you can see, the amount of work that is left is well beyond the
remaining four weeks of Google Summer of Code, which means that the
feature won't be merged by the end of the program. As a result, David
asked me if I could come up with something that would already be
usable by that date. My regression testing script came to mind: it can
already automatically build various branches of Nmap under Linux, scan
scanme.nmap.org using the resulting binaries, compare the results and
e-mail a report. The report includes the commands that were executed,
information about the timings and whether the port information was
consistent or not (if not, a nice-looking diff is generated so that
you can quickly tell whether it's the Linux ephemeral ports bug or
something more serious). In addition to that, a plot is generated that
shows both the per-host and per-group active probe counts and
congestion control variables such as cwnd or ssthresh; the plot is
attached to the e-mail and multiple recipients can be specified.
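
To give an idea, the port-consistency check boils down to something
like the following. This is a simplified sketch rather than the actual
tester.py code; it assumes the results were saved in grepable (-oG)
format and the helper names are made up:

    import re

    PORT_RE = re.compile(r"(\d+)/(\w+)/tcp")

    def parse_gnmap(path):
        """Return a {port: state} dict from a grepable (-oG) output file."""
        states = {}
        with open(path) as f:
            for line in f:
                if "Ports:" not in line:
                    continue
                for port, state in PORT_RE.findall(line):
                    states[int(port)] = state
        return states

    def diff_scans(path_a, path_b):
        """List (port, state_a, state_b) for ports whose state differs."""
        a, b = parse_gnmap(path_a), parse_gnmap(path_b)
        return [(port, a.get(port, "missing"), b.get(port, "missing"))
                for port in sorted(set(a) | set(b))
                if a.get(port) != b.get(port)]

Anything such a comparison returns corresponds to the kind of
inconsistency that triggers the diff in the report.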

I also created an environment that can be used to measure the
relationship between the packet drop percentage and Nmap's accuracy.
It consists of two virtual machines, a script that executes commands
over SSH and interprets their output, and a Scapy script that SYN+ACKs
every SYN packet, so that any connection attempt that made it to the
scanned host shows up as an open port.
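
The Scapy part of that environment is tiny; it amounts to something
like this (a simplified sketch, not the exact script, and the
interface name is of course environment-specific):

    from scapy.all import IP, TCP, send, sniff

    IFACE = "eth1"  # interface facing the scanning VM; adjust as needed

    def synack(pkt):
        """Answer every incoming SYN with a forged SYN+ACK."""
        ip, tcp = pkt[IP], pkt[TCP]
        reply = IP(src=ip.dst, dst=ip.src) / TCP(
            sport=tcp.dport, dport=tcp.sport,
            flags="SA", seq=12345, ack=tcp.seq + 1)
        send(reply, iface=IFACE, verbose=False)

    # Grab packets that have SYN set and ACK clear.
    sniff(iface=IFACE,
          filter="tcp[tcpflags] & tcp-syn != 0 and "
                 "tcp[tcpflags] & tcp-ack == 0",
          prn=synack, store=False)

One practical note: the answering VM's kernel will try to RST
connections it knows nothing about, so the setup also needs something
like an iptables rule dropping outgoing RSTs, or the forged SYN+ACKs
get torn down immediately.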

The scripts I mentioned have a few shortcomings, though, and since
there's not much time left, I decided to list them here and ask other
developers for feedback. Here are my ideas for features I could
implement:

* Code coverage statistics: compile the code with gcov support and
calculate the code coverage achieved by the tests that are already
there. This would be an incentive to increase the coverage the test
script achieves,

* Integrate the tests involving other VMs: so far, the tests that rely
on two VMs connected to each other are kept in a separate script that
needs to be called manually. I could integrate it and try writing a
framework that makes it easy to sniff for the packet properties a test
case needs to check (see the sketch after this list),

* Test the program under other Unixes: my script currently relies on a
Unix environment, but I haven't yet checked whether any GNUisms
sneaked in. I could try running it under some BSD flavors to see if it
works and, if not, try correcting it,

* Add Windows support: building Nmap under Windows is completely
different from building it on any Unix system. Even if I relied on
Cygwin (which is tempting, because it would be simpler), I would need
to rewrite the build code to make it use Microsoft Visual Studio. I'm
not sure how useful that would be without a Windows server, though,

* Just write more test cases: either add more test cases involving -sT
port scanning that could prove useful during the post-GSoC development
of nmap-nsock-scan, or start writing simple tests for other use cases.
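
As for the sniffing framework mentioned above, I imagine test cases
looking roughly like this (just a sketch: run_nmap, the target address
and the asserted property are made up for illustration):

    import shlex
    import subprocess
    import threading

    from scapy.all import TCP, sniff

    TARGET = "192.168.56.3"  # hypothetical address of the scanned VM

    def run_nmap(options):
        """Launch an nmap command in a background thread."""
        thread = threading.Thread(
            target=subprocess.call,
            args=(shlex.split("./nmap " + options),))
        thread.start()
        return thread

    def test_syns_stay_in_requested_range():
        """Example property: a -sT scan of -p1-100 only SYNs ports 1-100."""
        nmap_thread = run_nmap("-sT -p1-100 -n " + TARGET)
        packets = sniff(filter="tcp and dst host " + TARGET, timeout=30)
        nmap_thread.join()
        syn_dports = set(pkt[TCP].dport for pkt in packets
                         if pkt.haslayer(TCP) and pkt[TCP].flags & 0x02)
        assert syn_dports <= set(range(1, 101)), "unexpected ports probed"

Whether something like this is worth turning into a proper framework
is part of what I'd like feedback on.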

Which of these features do you find most interesting? Or maybe you
have your own ideas? I obviously can't promise to implement them all,
but I will consider the opinions from this thread. This doesn't mean
that I'll stop working on nmap-nsock-scan - this is mostly just my
attempt to prioritize things.

So, what do you think?
Jacek Wielemborek

PS. Feel invited to take a look at the code!
https://svn.nmap.org/nmap-exp/d33tah/nmap-portscan-tests/tester.py

Attachment: signature.asc
Description: OpenPGP digital signature

_______________________________________________
Sent through the dev mailing list
http://nmap.org/mailman/listinfo/dev
Archived at http://seclists.org/nmap-dev/
