Pros and Cons
+ Much more flexibility in repository organization - putting together a build from pieces of main, contrib, and javafx, and an SVN repo on some random Kenai project would not require any special thought, and dividing main into pieces would have little effect on the build process. Here as elsewhere I am assuming we go all the way and make most module-module dependencies release dependencies rather than snapshot dependencies. (mkleint: Not sure what you mean by "all the way", but there is technically no difference between release and snapshot dependencies with regard to repositories. Yes, we could split main, and we could assemble the final application bits from repositories only, but OTOH having everything build together would be more complicated. E.g. it would be hard to say "build me NB main modules with javafx and some random Kenai project, all from sources".)
+ Modules could be built and tested in isolation, then "promoted" if and when ready. Could relieve stress on continuous builders. (mkleint: See Versions Section below)
- No clear notion of a "daily build" when modules are composed from the repository, possibly confusing QE. Need somewhat different model for how bugs are reported and how their fixes are noted in bug reports. Need policy for when module releases are made - after every change? Every bug fix only? After fixing P2 bugs only? Weekly? At developer's option but at least before milestones? (mkleint: See Versions Section below)
+ Various miscellaneous build steps - ANTLR parser generation, etc. - could be handled by regular Maven plugins rather than requiring special support in the harness. Would avoid the need to discuss whether we should implement Cobertura support and so on; should come more or less for free. Easier support for writing parts of modules in Scala or Groovy, etc.
- Not obvious how to do a binary test distribution, though perhaps other aspects of a Mavenized build could convince QE to drop this requirement. (Currently would be unwieldy to do a full source build just to run a module's tests on a specific platform; but if you only need to do a source build of that module in isolation it may be more practical.) (mkleint: one can push the tests jar to remote repository next to the main project artifact, or one would split the code and have a special purpose testing project(s) for such bits)
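The tests-jar approach mkleint mentions could look roughly like this (a sketch only; the coordinates are invented for illustration). The first fragment attaches a tests JAR to the module's own build; the second consumes it from another project in order to run those tests against released binaries on a specific platform:

```xml
<!-- Producer side: attach the module's compiled tests as a "tests" artifact. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal> <!-- publishes foo-1.0-tests.jar next to foo-1.0.jar -->
      </goals>
    </execution>
  </executions>
</plugin>

<!-- Consumer side: pull those published tests in with test scope. -->
<dependency>
  <groupId>org.netbeans.modules</groupId> <!-- hypothetical coordinates -->
  <artifactId>foo</artifactId>
  <version>1.0</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```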
- Unit and functional tests would need to be combined into the same src/test/java. May not be a problem since these now use the same plain JUnit execution infrastructure, and functional tests not closely tied to a module could also be moved into independent Maven modules. (mkleint: the way integration/functional tests are often handled is to have a special project for them; not sure that is reasonable for all the functional tests we have...)
- Module release process could be slow. My experience publishing releases of a small Hudson plugin to java.net is that it takes around 5 minutes, which seems excessive. May be due in part to awful java.net performance. (mkleint: I think it's related to the SVN-based repository. I had a similar "slow" experience on Kenai. However, WebDAV-based repositories are reasonably fast.)
- maven-release-plugin support for Mercurial may be poor. (mkleint: Agreed. The way the release plugin works with SVN is not easily transplantable to hg. For each release it creates a new branch/tag, effectively copying the sources. In the hg world that would mean forking a new clone, and there is no way to branch/tag just part of an hg repository, so each module's release could trigger a new tag/branch on the whole repo (a tag probably being cheaper). For more, see the Versions section below.)
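For reference, pointing the release plugin at Mercurial is only a matter of the POM's <scm> declaration (the URL below is illustrative, not a real repository path); the branching/tagging semantics described above are the hard part, not the configuration:

```xml
<scm>
  <!-- Maven's scm:hg provider; URLs shown are placeholders for illustration -->
  <connection>scm:hg:http://hg.netbeans.org/main</connection>
  <developerConnection>scm:hg:https://hg.netbeans.org/main</developerConnection>
</scm>
```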
+ Get source associations for all modules, and Javadoc associations for API modules, for free (as JARs in Maven repo) without needing to check out NB sources from VCS.
+ JNLP support for the IDE/Platform could very likely be improved using Maven's model; needs study. (mkleint: study and probably also work; currently some JNLP works, but needs tweaking/work on the client side. The major obstacles seem to be (re-)signing of modules, the actual remote deployment setup, etc.)
+ Easier to distribute test utilities such as MockLookup - just make them into their own Maven modules, which can be freely depended upon with test scope.
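A sketch of what consuming such a test utility might look like (coordinates invented for illustration); test scope keeps it off the runtime classpath and out of the distribution:

```xml
<dependency>
  <groupId>org.netbeans.testutils</groupId> <!-- hypothetical coordinates -->
  <artifactId>mock-lookup</artifactId>
  <version>1.0</version>
  <scope>test</scope> <!-- visible to tests only, never shipped -->
</dependency>
```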
+ No need for IDE distribution to include unused autoloads such as openide.options, nor to include apisupport-oriented test libraries like nbjunit; anyone developing modules who declares deps on these will get them from the network repository. (mkleint: +1, however someone will have to build them and push them to the repositories anyway; probably a release-engineering-only job.)
- May be issues with standard Maven plugins that we have a hard time resolving promptly - we would need to wait for them to be fixed upstream. For example, annotation processing support in maven-compiler-plugin is currently spotty. Forking standard plugins to include fixes we need may be an option, or we may be able to work around some problems of default settings by having the NBM packaging type define some defaults (Hudson's HPI packaging does something like this if I recall correctly).
- Builds can be interrupted by network outages unless you set up a local repository. (mkleint: not sure what local means, but presumably we could have a Nexus Repository Manager at each site that would do proxying and local caching. That would not only defend against failures, but also speed up building significantly. We would also need it for uploading the external bits that are not present in any public repository)
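The per-site repository manager mkleint describes can be wired in through each developer's settings.xml, e.g. (hostname invented for illustration):

```xml
<settings>
  <mirrors>
    <mirror>
      <id>internal-nexus</id>
      <mirrorOf>*</mirrorOf> <!-- route all remote repository traffic through the proxy -->
      <url>http://nexus.example.org/content/groups/public</url>
    </mirror>
  </mirrors>
</settings>
```

Once artifacts are cached by the manager, a network outage upstream no longer breaks builds, and repeated downloads stay on the LAN.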
+ May be easier to deal with third-party binary libraries, especially those which are simply used by NB module code. Maven may not help with software packages which need to be copied into the IDE distribution for end users.
+ OSGi interoperability would likely be simpler to achieve.
- Developing API changes in tandem with client usages of those API would become more complex. You would need to temporarily switch a copy of the client module to use a snapshot dependency on the API in order to do back-and-forth development. When finished you would commit the API change, publish a release of that module, then commit the client module changes. (mkleint: See Versions Section below)
+ Developer & QE view of module timelines would much more closely match what a user running a dev build and updating via Plugin Manager would see. Likewise if we do JNLP publishing.
+ Impossible to forget to declare updated dependencies when using a new API: you need to declare them in order to compile. (Does not apply to behavioral dependencies, unless these are checked during unit tests.)
+ Build process can enforce API signature compatibility with previous module release.
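One possible way to enforce this (a sketch, not a decided approach) is the Codehaus clirr-maven-plugin, which compares the current module against a previous release and can fail the build on binary-incompatible API changes:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>clirr-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>check</goal> <!-- fails the build on binary incompatibilities -->
      </goals>
    </execution>
  </executions>
  <configuration>
    <comparisonVersion>1.0</comparisonVersion> <!-- previous module release to compare against -->
  </configuration>
</plugin>
```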
+ mkleint: The current system of Team repositories might become ineffective and counterproductive, and could be abandoned.
+ Classpath scanning when opening a module with many dependencies would be far quicker: only JARs need to be checked for recent changes, which there would never be any of unless you had changed the POM to request new versions of dependencies. (In the current system, all dependencies are against source projects, so the Java indexer scans dozens of source trees to make sure nothing in your module has been broken.)
There are probably just two ways of handling versions: either use -SNAPSHOT versions or fixed release versions. The general recommendation is to use -SNAPSHOT for things that are built and released together, which would then actually apply to the whole of NetBeans. I suppose collateral codelines like javafx could be separately versioned, though, or modules in "maintenance mode" that almost no one ever touches. But still, the code's trunk should always be -SNAPSHOT versioned. We should never arrive at a situation where you have two distinct binaries with the same release number, and we should never ever overwrite released bits in remote repositories.
jglick: disagreed; using snapshot dependencies invalidates most of the pros listed above! What I am proposing is that snapshots never be in public repos (the usual Maven policy) and that in order to "publish" changes you have been making to a module, even in development builds, you do a normal Maven release. (Teams would be free to use snapshot dependencies among a small set of related modules in the same versioning root; these would be released as a group.) An IDE release would then be a grouping of module releases which has been verified to be of high quality. TBD what the mechanism would be for offering subsets of this grouping as upstream dependencies to third parties, but I think that is a manageable problem.
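To make the distinction the two positions above hinge on concrete (coordinates invented for illustration): a release dependency is immutable once fetched, while a snapshot dependency is re-resolved against the latest published snapshot on each build:

```xml
<!-- Release dependency: pinned and reproducible (jglick's proposal) -->
<dependency>
  <groupId>org.netbeans.api</groupId> <!-- hypothetical coordinates -->
  <artifactId>openide-util</artifactId>
  <version>7.10</version>
</dependency>

<!-- Snapshot dependency: floats to the newest published build (mkleint's proposal) -->
<dependency>
  <groupId>org.netbeans.api</groupId>
  <artifactId>openide-util</artifactId>
  <version>7.11-SNAPSHOT</version>
</dependency>
```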
Maven version vs specification version
Conceptually they are exact opposites. In Maven you get multiple 1.0-SNAPSHOT binaries until you reach a point in time when you produce one 1.0 binary, then you develop 1.1-SNAPSHOT. With spec versions, you get multiple different 1.0 binaries until you decide to bump the version, and then you get multiple 1.1 bits. We cannot possibly graft that sort of versioning onto Maven, so I suggest we keep them separate. I suggest we have a single Maven version for all NetBeans modules within a release, because the release is the only actual cutting point in the NetBeans lifecycle when it reaches a certain quality. If we keep cutting small module-scale releases during development, these would not have quality at all comparable to what gets used in the final bits. What's the point of releasing those then anyway? Who will consume them? Having one version for all modules also makes upgrades much simpler for external people.
The way the nbm-maven-plugin currently works is that it uses the spec versions of its dependencies and generates the manifest dependencies section accordingly. So all modules always depend on the latest supported bits. I don't see a point in declaring that your module can depend on an old version of something else while it always ships with the latest bits anyway. Also please note that if we were to depend on older versions, the resolution process of deciding which version to actually use in a build could take a considerable amount of time. So again I'm promoting a single-version-for-all approach: always depend on the latest bits.