Apache NetBeans

Sunday August 25, 2019

What to do with JavaFX and OpenJFX in Apache NetBeans?

If Apache NetBeans runs on JDK 8, a range of Ant-based JavaFX sample applications is available in NetBeans to help you get started and learn about JavaFX. However, if NetBeans does not run on JDK 8, those Ant-based JavaFX samples don't work, i.e., they can't be created. There's no point in fixing that, since from JDK 11 onwards JavaFX is no longer part of the JDK, and Maven/Gradle-based OpenJFX samples are the obvious candidates for integration into NetBeans instead.

However, how should that be handled in NetBeans? Before Apache NetBeans 11.1, there was no integration with OpenJFX: only JavaFX projects and samples were built into NetBeans. That led to a great deal of confusion, since anyone setting up an environment from scratch today is unlikely to have installed JDK 8. Much more likely, they'll have JDK 11 or 12, and then those JavaFX projects and samples in NetBeans cannot be used, i.e., when you try to create those samples while running NetBeans on anything other than JDK 8, the wizard simply tells you that you have the wrong JDK. You then somehow need to find out that the best thing to do next is to use the OpenJFX documentation to set up the OpenJFX samples in NetBeans.

That is suboptimal and so Gluon integrated their two sample applications into Apache NetBeans 11.1, i.e., in the most recent release:


That is a step forward but still suboptimal, as explained here by Jaroslav Tulach:


That entire new module is not needed. Literally, all that needs to be done is to update this file with two new template registrations:


And, literally, this is all that needs to be added there, since the two OpenJFX samples are on Maven Central and as pointed out above, "NetBeans has a nice support for creating wizards over Maven archetypes."

@TemplateRegistration(folder = ArchetypeWizards.TEMPLATE_FOLDER,
        position = 925,
        displayName = "#LBL_Maven_FXML_Archetype",
        iconBase = "org/netbeans/modules/maven/resources/jaricon.png",
        description = "javafx.html")
@Messages("LBL_Maven_FXML_Archetype=FXML JavaFX Maven Archetype")
public static WizardDescriptor.InstantiatingIterator<?> openJFXFML() {
    return ArchetypeWizards.definedArchetype("org.openjfx", "javafx-archetype-fxml", "0.0.2", null, LBL_Maven_FXML_Archetype());
}

@TemplateRegistration(folder = ArchetypeWizards.TEMPLATE_FOLDER,
        position = 926,
        displayName = "#LBL_Maven_Simple_Archetype",
        iconBase = "org/netbeans/modules/maven/resources/jaricon.png",
        description = "javafx.html")
@Messages("LBL_Maven_Simple_Archetype=Simple JavaFX Maven Archetype")
public static WizardDescriptor.InstantiatingIterator<?> openJFXSimple() {
    return ArchetypeWizards.definedArchetype("org.openjfx", "javafx-archetype-simple", "0.0.2", null, LBL_Maven_Simple_Archetype());
}

That is all that needs to be added to the Java source file above, instead of having a completely new module, which doesn't integrate as neatly with the Apache NetBeans infrastructure. (And this is a small tip for anyone else wanting to make their Maven archetypes available in NetBeans: the above is literally all you need to do.)

However, the fundamental question remains: how do we notify users of Apache NetBeans that they should be using OpenJFX and not JavaFX? Maybe we should simply remove all JavaFX projects and samples; however, that would be unfortunate for anyone using JDK 8. Or maybe the solution is to create a category named "Legacy" in the New Project dialog and put all JavaFX projects and samples there, so that it's clear they're not recommended, while still keeping them available for JDK 8 users?

Saturday August 24, 2019

Simplified Apache NetBeans Welcome Screen

To simplify the Welcome Screen and, in particular, replace all links to netbeans.org with netbeans.apache.org, I have created this issue and pull request:



All references to netbeans.org are replaced with equivalents at netbeans.apache.org. The News column, which pointed to netbeans.org, is removed from the tab below, while the Blogs column is renamed to News, since newsworthy items now come from here, i.e., from this blog:

Also, the Featured Demo on the first tab is removed. It's best to have as few links to external places as possible, i.e., to reduce potential points of failure; the demo doesn't add all that much, while removing it avoids external URL calls that could cause problems and slow things down.

Saturday August 17, 2019

LSP Client demo - (ba)sh language server

Below is a scenario by Jan Lahoda, the creator of LSP integration for Apache NetBeans, for how to integrate the bash language server with Apache NetBeans, including syntax highlighting.

Setting Up

  1. Install npm (and node.js). On Ubuntu, e.g., run "apt install npm"; something different will be needed on Mac OS X.

  2. Create a directory in which we are going to work, have a terminal opened in that directory.

  3. Install the bash-language-server:

    npm install bash-language-server

    On Mac OS X:

    npm install bash-language-server --unsafe-perm

    This will install the server into the current directory.

  4. Try the bash server:

    ./node_modules/bash-language-server/bin/main.js --help

    You should see something like this:

      bash-language-server start
      bash-language-server -h | --help
      bash-language-server -v | --version

  5. Create a NetBeans module. In it, create a File Type (Module Development/File Type) with MIME type text/sh and file extension sh.
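The sanity check in step 4 can also be run from Java, mirroring what the module will later do via ProcessBuilder. A minimal sketch, assuming the server was installed into ./node_modules as in step 3 (if it isn't, the code only reports the command it would have run):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CheckBashServer {
    public static void main(String[] args) throws Exception {
        // Path into the working directory from step 3; adjust as needed.
        Path main = Path.of("node_modules", "bash-language-server", "bin", "main.js");
        // Invoke through node explicitly so the script's shebang is not required.
        List<String> command = List.of("node", main.toString(), "--help");
        System.out.println("command: " + String.join(" ", command));
        if (!Files.exists(main)) {
            System.out.println("server not installed yet (run: npm install bash-language-server)");
            return;
        }
        Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            // The usage text should include "bash-language-server start".
            r.lines().forEach(System.out::println);
        }
        p.waitFor();
    }
}
```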

Syntax Coloring via TextMate

  1. Download the TextMate grammar file here, and put it alongside the newly created DataObject:


  2. Add "TextMate Lexer" as a dependency of the module.

  3. Into the DataObject add this annotation:

    @GrammarRegistration(grammar="shell-unix-bash.tmLanguage.json", mimeType="text/sh")

    GrammarRegistration is:

    import org.netbeans.modules.textmate.lexer.api.GrammarRegistration;

This should lead to syntax highlighted source for .sh bash files taken from the TextMate grammar file.

Language Support via the Language Server

Next, we need to add language support using the language server.

  1. Add "LSP Client" and "MIME Lookup API" as dependencies of the module.

  2. Create a new class, ShellClient, in the module, and put this into it (replacing "<path-to-bash-language-server>" with the absolute path to "node_modules/bash-language-server"):

    import java.io.IOException;
    import org.netbeans.api.editor.mimelookup.MimeRegistration;
    import org.netbeans.modules.lsp.client.spi.LanguageServerProvider;
    import org.openide.util.Exceptions;
    import org.openide.util.Lookup;

    @MimeRegistration(mimeType="text/sh", service=LanguageServerProvider.class)
    public class ShellClient implements LanguageServerProvider {
        @Override
        public LanguageServerDescription startServer(Lookup lkp) {
            try {
                Process p = new ProcessBuilder("<path-to-bash-language-server>/bin/main.js", "start").start();
                return LanguageServerDescription.create(p.getInputStream(), p.getOutputStream(), p);
            } catch (IOException ex) {
                Exceptions.printStackTrace(ex);
                return null;
            }
        }
    }
    You may need to invoke node explicitly in the above code, i.e., as follows:

    Process p = new ProcessBuilder("node", "<path-to-bash-language-server>/bin/main.js", "start").start();

  3. Build and start the module.

Caveat: the language server is started only for files that are inside a project, so create a new project (of any type) and put a shell file inside it, e.g., copy "bin/netbeans" into the project as "test.sh". Open it in the editor: there should be syntax highlighting, the Navigator should be populated, code completion should show something, etc.
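Under the hood, the streams handed to LanguageServerDescription.create carry JSON-RPC messages framed according to the LSP base protocol: a Content-Length header, a blank line, then the UTF-8 body. A minimal sketch of that framing (the payload here is an illustrative initialize request, not what NetBeans literally sends):

```java
import java.nio.charset.StandardCharsets;

public class LspFraming {
    // Frame a JSON-RPC payload per the LSP base protocol:
    // a Content-Length header, a blank line, then the UTF-8 body.
    static String frame(String json) {
        byte[] body = json.getBytes(StandardCharsets.UTF_8);
        return "Content-Length: " + body.length + "\r\n\r\n" + json;
    }

    public static void main(String[] args) {
        String initialize =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"initialize\",\"params\":{}}";
        // Prints a "Content-Length: 58" header followed by the body.
        System.out.print(frame(initialize));
    }
}
```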

Tuesday August 06, 2019

Why Does Apache NetBeans Need Its Own Parsers?

A question was asked on the Apache NetBeans mailing list: "I was just curious about the theoretical aspect of parsing. Isn't there a unified parsing API, using ANTLR/lex/yacc which can parse any language given a grammar for it? Why do we use a different parsing implementation (like the Graal JS parser in this instance) when a unified approach will help us support lots of languages easily?"

Tim Boudreau, involved in NetBeans from its earliest hours, responds, in the thread linked above:

First, in an IDE, you are *never* just "parsing". You are doing *a lot* with the results of the parse. An IDE doesn't have to just parse one file; it must also understand the context of the project that file lives in; how it relates to other files and those files' interdependencies; multiple versions of languages; and the fact that the results of a parse do not map cleanly to a bunch of stuff an IDE would show you that would be useful. For example, say the caret is in a Java method, and you want to find all other methods that call the one you're in and show the user a list of them. The amount of work that has to happen to answer that question is very, very large. To do that quickly enough to be useful, you need to do it ahead of time and have a bunch of indexing and caching software behind the scenes (all of which must be adapted to whatever the parser provides) so you can look it up when you need it. In short, a parser is kind of like a toilet seat by itself. You don't want to use it without a whole lot of plumbing attached to it.

Second, while there are tools like ANTLR (version 4 of which is awesome, by the way), there is still a lot of code you have to write to interact with the results of a parse to do something useful beyond syntax coloring in an IDE. One of my side projects is tooling for NetBeans that *does* let you take an ANTLR grammar and auto-generate a lot of the features a language plugin should have. Even with that almost completely declarative, you wind up needing a lot of code. One of the languages I'm testing it with is a simple language called YASL which lets you define JavaScript-like schemas with validation constraints (e.g., this field is a string, but it must be at least 7 characters and match this pattern; this is an integer number but it must be > 1 and less than 1000 - that sort of thing). All the parsing goodness in the world won't write hints that notice that, say, the maximum is less than the minimum in an integer constraint and offer to swap them. Someone has to write that by hand.

Third, in an IDE with a 20 year history, a lot of parser generating technologies have come and gone - javacc, javacup, ANTLR, and good old hand-written lexers and parsers. Unifying them all would be an enormous amount of work, would break a lot of code that works just fine, and the end result would be - stuff we've already got, that already works, just with one-parser-generator-to-rule-them-all underneath. Other than prettiness, I don't know what problem that solves.

So, all of this is to say: We use different parsing implementations because parsing is just a tiny piece of supporting a language, so it wouldn't make the hard parts easier enough to be worth it. And there will be new cool parser-generating technologies that come along, and it's good to be able to use them, rather than be married to one-parser-generator-to-rule-them-all and have this conversation again, when they come along.
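The hand-written hint Tim describes for the inverted integer constraint amounts to a small check over the parsed constraint plus a fix that swaps the bounds. A sketch; the IntConstraint model here is invented for illustration, not YASL's actual AST:

```java
public class RangeHint {
    // A parsed integer constraint, as a language plugin might model it.
    record IntConstraint(int minimum, int maximum) {}

    // The hint: flag constraints whose maximum is below the minimum.
    static boolean isInverted(IntConstraint c) {
        return c.maximum() < c.minimum();
    }

    // The fix the hint offers: swap the two bounds.
    static IntConstraint swapBounds(IntConstraint c) {
        return new IntConstraint(c.maximum(), c.minimum());
    }

    public static void main(String[] args) {
        IntConstraint broken = new IntConstraint(1000, 1);
        System.out.println("inverted: " + isInverted(broken)); // inverted: true
        System.out.println("fixed: " + swapBounds(broken));    // fixed: IntConstraint[minimum=1, maximum=1000]
    }
}
```

The point stands: no parser generator emits this logic; the semantic check and its fix are written by hand, per language, per constraint.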


