While there may be subtle details changed by the February 2010 CTP, the broad thrust of how F# looks to a C#-directed tool remains. The mass of compiler-inserted locals which skews the analysis certainly continues to hold.
First off, a little confession -- while I'm a C# coder by day, by night I use a whole variety of other languages instead, so I don't have a large corpus of my own code to run the tool against; and having installed my gift copy of NDepend v2.12.1.3123 (Pro Edition) on my personal development machine at home, there are certain ethical issues involved in getting it to run over code from the day job as-is.
I do have an on-going project in F#, which the tool will at least consume after a fashion (as the tool doesn't know .fsproj files from Greek, analysis has to proceed by selecting a number of assemblies to work with), so my experiences are based on that.
First impressions
At one extreme, tools such as FxCop or StyleCop provide a report that indicates a number of point changes to make to code, with pretty much zero configuration of the tools; at the other extreme, nUnit and nCover require you to configure (indeed, code) pretty much everything, and responding to the output is a matter of judgement. NDepend is somewhere in the middle -- you can run it with default settings and get some analysis of your code, but the settings are completely malleable, and the response can in places be a judgement call too.
There are some rules in the default set which overlap, and in places contradict, guidance from FxCop or StyleCop -- especially in matters like naming conventions. The latter is the stuff of which religious arguments are made (my own choice being to leave the policing of names to the MSFT tools). Similarly, NDepend invents a method-level NDepend.CQL.GeneratedAttribute to mark generated code, when we already have System.Runtime.CompilerServices.CompilerGeneratedAttribute and the class-applicable System.CodeDom.Compiler.GeneratedCodeAttribute.
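For reference, a minimal F# sketch of how the existing framework attributes can be applied -- the type, member, and tool-name strings here are purely illustrative:

```fsharp
open System.CodeDom.Compiler
open System.Runtime.CompilerServices

// Class-level marking with the BCL attribute; the tool name and
// version strings are whatever the generating tool chooses.
[<GeneratedCode("SomeCodeGenerator", "1.0.0.0")>]
type GeneratedWrapper() =
    // Method-level marking via the compiler-services attribute.
    [<CompilerGenerated>]
    member this.GeneratedStub() = ()
```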
Some of the conditions tested (setting aside the false positives arising from F#) are useful complements to FxCop, such as flagging places where visibility could beneficially be tightened to better encapsulate code; these are essentially resolved by point fixes in many cases.
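As a trivial sketch of the kind of point fix involved (module and function names hypothetical), tightening the default-public visibility of an F# binding that is only used within its own assembly:

```fsharp
module Analysis =
    // Only called from within this assembly, so the default public
    // visibility can be narrowed -- the sort of change such a rule suggests.
    let internal countPositive (values : int list) =
        values |> List.filter (fun v -> v > 0) |> List.length
```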
The real value comes in the rules that examine the structure of the code (even if, again, the precise thresholds used are potentially subjects for debate).
Live test
I ran NDepend against the latest drop of my FxCop rules project; this is admittedly a small sample of code, and in some respects an edge case (with strong coupling to framework or FxCop classes, and very low abstraction).
The project being F#, I have to supply just the assemblies, and cannot get source-level analysis (such as cyclomatic complexity or comment-density metrics).
Aside from that language difficulty, the one place I have had trouble is in making sense of the coverage-file integration -- the nCover output I supply seems to get ignored, and the linked-to tutorial refers to such an earlier version that the UI doesn't match at all.
Interpretation (and F# peculiarities)
With no interfaces defined in the project, I seem to trigger a fencepost error in the report's interface statistics -- the Max column reads as if computed as count - 1 over an empty set:
Stat | # Occurrences | Avg | StdDev | Max |
Properties on Interfaces | 0 Interfaces | 0 | 0 | -1 properties on |
Methods on Interfaces | 0 Interfaces | 0 | 0 | -1 methods on |
Arguments on Methods on Interfaces | 0 Methods | 0 | 0 | -1 arguments on |
and there are some places where the names of F# methods seem to be problematic -- for a number of rules I get a "WARNING: The following CQL constraint is not satisfied. 1 methods on 358 tested match the condition.", but the Full Name column is left blank, so the offending method I have to find for myself:
methods | # lines of code (LOC) | Full Name |
Sum: | 2 | |
Average: | 2 | |
Minimum: | 2 | |
Maximum: | 2 | |
Standard deviation: | 0 | |
Variance: | 0 | |
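My suspicion (unverified) is that the offender is a method whose compiled name comes from an F# operator or double-backtick identifier, which a C#-oriented report may decline to print; the sort of definitions involved look like this:

```fsharp
// A custom operator compiles to a method with a mangled name
// (here op_GreaterQmarkGreater), which C#-oriented tooling may
// struggle to display.
let (>?>) f g = fun x -> g (f x)

// Double-backtick identifiers compile to member names containing
// spaces, another likely casualty in a C#-tuned report.
let ``parse and check`` (input : string) = input.Length > 0
```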
As I've been posting in my "F# under the covers" series, the IL that gets generated is, shall we say, interesting; and that is one of the things that gets shown up by rules tuned to C#. Indeed, the very first run I made was what turned a planned one-shot post into an on-going series. For example, one method is reported as having 11 variables and a cyclomatic complexity (in IL) of 12; and that same method -- which is used in an expression where >?> and >+?? are combinators -- gets detected as unused ("No Afferent Coupling -> The method is not used in the context of this application.").
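To give a flavour of where such numbers come from, here is a hedged sketch (not the actual method from my project) of a short function where the tuple pattern, the closure and the pipeline each contribute compiler-inserted IL locals that C#-tuned metrics then count:

```fsharp
// Two lines of F# source; the tuple deconstruction, the lambda's
// closure plumbing and the pipeline temporaries all surface as
// extra locals in the generated IL.
let classify (threshold : int, values : int list) =
    values |> List.filter (fun v -> v > threshold) |> List.length
```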
A number of rules -- such as "A stateless type might be turned into a static type" -- show up quite how many compiler-generated classes there are: both the ones with @-line-number names (e.g. Patterns+MapToTypes@41) for F# funs, and the simple nested classes for algebraic data types. In production use of NDepend on F#, these rules would have to be extended so that such types could be filtered out automatically.
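For illustration, a minimal sketch (hypothetical, by analogy with the Patterns+MapToTypes@41 case above) of how such @-named classes arise:

```fsharp
module Patterns =
    // The lambda below compiles to a nested closure class with a
    // line-number-bearing name (something like Patterns+mapToTypes@5),
    // which then trips type-level rules such as the stateless-type one.
    let mapToTypes (items : obj list) =
        items |> List.map (fun x -> x.GetType())
```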
For other false positives, such as the spurious numbers of locals, it would be beneficial to be able to filter on a suppression attribute (along the lines of System.Diagnostics.CodeAnalysis.SuppressMessage) with a string property that the rule could match on.
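By way of a sketch, the sort of attribute I have in mind might look like this in F# -- entirely hypothetical, not something NDepend provides -- with a rule-name string for a CQL query to match against:

```fsharp
open System

// Hypothetical suppression attribute, by analogy with
// System.Diagnostics.CodeAnalysis.SuppressMessageAttribute.
[<AttributeUsage(AttributeTargets.All, AllowMultiple = true)>]
type SuppressMetricAttribute(ruleName : string) =
    inherit Attribute()
    // The string a CQL rule could match on to skip this target.
    member this.RuleName = ruleName
```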
Overall
Another useful and interesting tool in the general .net developer's armoury. Even out of the box, I could see it providing worthwhile automation to supplement a manual code review process, though it would require a degree of customization to fit into an established build and coding standards regime.