Moving from AppEngine and taking your data with you

This article appeared first at

AppEngine and its companion Datastore have been of great value for our application. Nevertheless, the time has come to move to another cloud, which also means facing the annoying fact that all our data is stored within a closed, isolated black box inside Google's infrastructure, accessible only through Google's proprietary AppEngine API.

This article explains how we managed to download all data from Datastore and prepare it for another cloud.

Our requirements

We considered 5 major requirements:

  1. Preserve the data as is (which means losing no information due to data representation issues).
  2. Preserve the document-oriented and non-relational nature of the data (which means the data will not easily fit into tables).
  3. Preserve parent-child relationships between entities (which means not moving related entities apart).
  4. Support very large amounts of data (which means it won’t fit into CSV files).
  5. Allow analyzing the data before importing it into the new cloud (which means enabling some basic data engineering tasks).

Possible approaches

There are 3 major approaches discussed on the Internet to extract data from Datastore:

  1. Import data to Google BigQuery.
  2. Offer an “export” feature within the application.
  3. Export a backup to Google Storage.

Our investigation on each approach

Moving data to BigQuery sounds appealing, but we would still be stuck within Google infrastructure. And it would not guarantee preservation of the original data, due to forced format conversions and limitations in representing entity relationships.

We do not consider the implementation of an export feature a straightforward task. It would be time consuming to provide a correct implementation able to handle all entity scenarios. This implementation would be difficult to test, monitor and validate, as it would run only inside AppEngine.

There are several open source libraries that provide generic features to export data from within the application. Many of them have not been maintained for several years. They would require a certain amount of understanding and reverse engineering to ensure they fit our purpose.

The Google Cloud console states that the backup produced by the Datastore console is encoded as standard LevelDB, but no LevelDB implementation was able to read it. However, people reported that these files are encoded as Google Protocol Buffers and that there are legacy libraries able to read them.

There is only sparse information about what is contained within the output files, and there are no official libraries that claim to understand them. Fortunately, there is previous work reported on Stack Overflow. This project on GitHub from Venryx demonstrates how to use legacy Python libraries from AppEngine to read the files, converting Protocol Buffer streams into entities represented as Python objects (mostly dictionaries with additional Datastore metadata).

Our strategy

We decided to explore the third approach: export a backup to Google Storage and convert this backup to an understandable format.

It has the convenience of breaking the problem into four smaller, independent ones:

  1. Export a backup from Datastore to Google Storage.
  2. Transfer the backup from Google Storage to a storage outside Google infrastructure.
  3. Convert the proprietary representation into an understandable representation.
  4. Import the data into the new cloud.

Steps 1 and 2 need to be executed only once and are straightforward. And we incur the expensive Datastore and Storage billing only once. As a result, we get a ‘local’ copy we can investigate and reverse engineer step by step, without resorting to Google infrastructure again.

By converting the data to the understandable representation, we were able to analyze the data and to decide the best strategy to import it into the destination cloud. We were also able to validate the converted data before assuming the burden of importing it into the new cloud.

For the understandable representation, we decided to use JSON. While this representation is not as compact or efficient as others, it is a document-oriented solution. It allows representing the hierarchical nature of entities and allows inspection by well-known open source tools. Therefore, we consider it a perfect fit for our requirements.

Our further investigation

Datastore exports the data into many (maybe tens of thousands of) files called “output-i”, each of which seems to be about 1-10MB. Probably, the size of each file depends on the number of entities stored within it and the size of each entity. We suppose that each file contains the result of a short-lived map-reduce background job, which seems to be the mechanism used by the Datastore export feature.

Our investigation also revealed that, apparently, entities within the same output file are potentially parent-child related. At least in our data, we could not observe related entities spread across multiple output files. Root entities within an output file are potentially of the same kind, while child entities are potentially of different kinds, depending on how the data was modeled.

Based on these considerations, we felt safe to decode each file separately. This allows us to run the decoding in map-reduce style: convert all output files in parallel and merge the results for the same root entities, which gracefully satisfies the requirement to support very large amounts of data.
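As a minimal sketch of this map step (convert_file is a hypothetical wrapper around the per-file decoding described later in this article; the function names are ours, not part of the original tooling):

```python
from multiprocessing import Pool

def convert_file(filename):
    # Hypothetical placeholder: read one 'output-NNN' backup file,
    # decode its entities, write 'output-NNN.json', and return that name.
    return filename + '.json'

def convert_all(filenames, workers=4):
    # Convert each backup file in its own worker process.
    # A failed file can simply be retried without redoing the others.
    pool = Pool(workers)
    try:
        return pool.map(convert_file, filenames)
    finally:
        pool.close()
        pool.join()
```

The merge step (joining results that belong to the same root entity) then runs over the per-file JSON outputs.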

Our first approach

We started with this project at Github from Venryx, which proposes converting the backup files into a single JSON file. By following the minimal instructions, we were able to convert the backup into JSON, but the implementation has two major drawbacks:

  1. It produces a single and enormous JSON file.
  2. It loads all entities into memory before writing them to the file.

Therefore, this approach is not suitable for very large amounts of data. There might not be enough resources to load everything into memory. If something fails, the entire conversion fails without any intermediate result to resume from after fixing the failure. It does not allow processing the files in parallel (targeting a map-reduce approach). And it is nearly impossible to load, analyze and validate the enormous JSON file.

The JSON contains a dictionary where each key is an entity kind and the value is an array of entities of the respective kind.

Further, we noticed that:

  1. String fields are encoded as UTF-8, but with apparent escape codes written as ASCII.
  2. It is unable to decode fields that contain embedded entities, references (keys) to other entities, binary data and many other types.
  3. All entities are handled as root entities.

Our improved approach

We changed Venryx’s implementation to handle one backup file at a time instead of all files, and to produce one JSON file for each backup file. This allows running the conversion in parallel and restarting only the conversions that failed. The JSON files are now small.

We also changed the implementation to write JSON that preserves the parent-child relations from Datastore. The JSON now contains a dictionary where each key is a root entity kind and the value maps each entity’s ID or name to its fields. Within each entity, we add an additional dictionary for child entities: again, each key is a child entity kind whose value maps IDs or names to fields. This structure repeats recursively.
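As a sketch of the resulting shape (the kinds, IDs and fields here are purely illustrative, not taken from our real data model):

```python
import json

# Hypothetical example: a root 'Client' entity owning a child 'Order'
# entity. Child entities are nested inside their parent, keyed first by
# kind and then by ID or name, recursively.
json_tree = {
    'Client': {                       # root entity kind
        '42': {                       # entity ID or name
            'name': 'Alice',          # entity fields
            'Order': {                # child entity kind
                '7': {'total': 99.9}  # child entity ID/name and fields
            }
        }
    }
}

print(json.dumps(json_tree, ensure_ascii=False, indent=2))
```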

We added additional handling for fields of non-trivial types. The Protocol Buffer representation used by Datastore does not distinguish composed types or binary types; they are all handled as ASCII strings. One must know the data model to decide whether a string field is really a string, a binary value or an encoded value (for example, an embedded entity). Therefore, it is not possible to create a fully generic approach. The code needs to inspect each entity and decide, based on its kind and fields, how to decode each of them.

To decode references to other entities, the API returns a field of type “Key”. Therefore, we iterate over all fields and convert those of type Key. Remember that Datastore may use either an ID or a name to identify the entity.

def keyToJson(key):
  ds_kind = key.kind()
  ds_name =
  ds_id =
  if ds_name:
    return { 'kind': ds_kind, 'name': ds_name }
  if ds_id:
    return { 'kind': ds_kind, 'id': ds_id }

def decodeReference(entity_dict):
  for k in entity_dict:
    v = entity_dict[k]
    if v and type(v) is datastore_types.Key:
      entity_dict[k] = keyToJson(v)

To decode binary types, remember that fields may be present but may be None. In this example, the JSON will contain the binary data encoded as base64. Any other text encoding would also be fine, except raw binary, as the corresponding ASCII control characters are not supported by the JSON writer. Note that your method needs to know the data model and look for every relevant kind/field pair. This example decodes the salted/hashed password, which is stored as a binary field.

def decodeBinary(entity_kind, entity_dict):
  if entity_kind == 'User' and 'password' in entity_dict:
    v = entity_dict['password']
    if v:
      entity_dict['password'] = v.encode('base64').replace('\n', '')

One must implement many other decoders for each possible binary/composed type that is not supported by the Protocol Buffer model of Datastore. You will find all embedded/binary types at google.appengine.api.datastore_types (from google.appengine.api import datastore_types).
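When the number of such decoders grows, a simple way to organize them is a registry keyed by (kind, field). This is only a sketch of the idea; the kinds, field names and helper names below are hypothetical, not part of the Datastore API:

```python
import base64

def b64_field(value):
    # Encode raw bytes as base64 text so the JSON writer accepts it.
    return base64.b64encode(value).decode('ascii')

# Hypothetical registry mapping (entity kind, field name) to a decoder.
FIELD_DECODERS = {
    ('User', 'password'): b64_field,
}

def decode_fields(kind, entity_dict):
    # Apply every registered decoder that matches this entity's kind,
    # skipping fields that are absent or None.
    for (decoder_kind, field), decode in FIELD_DECODERS.items():
        if decoder_kind == kind and entity_dict.get(field) is not None:
            entity_dict[field] = decode(entity_dict[field])
    return entity_dict
```

Adding support for a new kind/field pair then only requires registering one more entry instead of editing a chain of if statements.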

To decode embedded types, we proceeded similarly to binary types. We need to consider that the string contains binary data, which is the Protocol Buffer representation of the embedded entity. It took some hours to discover how to create an embedded entity from this string, but it is quite simple. We convert the entity to a dict in order to remove all Datastore metadata. This example decodes the home address, which is stored as an embedded entity of the Client entity.

def decodeEmbedded(entity_kind, entity_dict):
  if entity_kind == 'Client' and 'address' in entity_dict:
    v = entity_dict['address']
    if v:
      entity = datastore.Entity.FromPb(v)
      entity_dict['address'] = dict(entity)

To fix the UTF-8 encoding issues, we changed the JSON dump to use the codecs module:

out =, 'w', encoding='utf8')
out.write(json.dumps(jsonTree, default=JsonSerializeFunc, ensure_ascii=False, indent=2))
out.close()

The resulting main loop:

# For a given entity key, returns the dictionary where to add fields of the entity.
# It creates all nodes on the tree according to the path described by the key.
# For example, a root entity will be mapped to jsonTree/kind/id_or_name
# A child entity will be mapped to jsonTree/root_kind/root_id_or_name/kind/id_or_name
# and so on.
def get_dest_dict(key, jsonTree):
  parent = key.parent()
  if parent is None:
    kind = key.kind()
    id_or_name = key.id_or_name()
    if kind not in jsonTree:
      jsonTree[kind] = {}
    if id_or_name not in jsonTree[kind]:
      jsonTree[kind][id_or_name] = {}
    return jsonTree[kind][id_or_name]
  else:
    jsonTree2 = get_dest_dict(parent, jsonTree)
    kind = key.kind()
    id_or_name = key.id_or_name()
    if kind not in jsonTree2:
      jsonTree2[kind] = {}
    if id_or_name not in jsonTree2[kind]:
      jsonTree2[kind][id_or_name] = {}
    return jsonTree2[kind][id_or_name]

def Start():
  jsonTree = {}
  files = sorted(os.listdir(sourceDir))
  # or a list of files given by command line arguments

  for filename in files:
    if not filename.startswith("output-"): continue
    print("Reading from: " + filename)
    inPath = os.path.join(sourceDir, filename)
    raw = open(inPath, 'rb')
    reader = records.RecordsReader(raw)
    for recordIndex, record in enumerate(reader):
      # Get the protocol buffer representation of the next entity
      entity_proto = entity_pb.EntityProto(contents=record)
      # Convert protocol buffer representation to entity
      entity = datastore.Entity.FromPb(entity_proto)
      # Convert entity to dict, to remove Datastore metadata
      dict2 = dict(entity)
      kind = entity.kind()
      # Do not convert unwanted entities
      if kind == 'UnwantedEntity': continue
      # Decode references to other entities (fields of type Key)
      decodeReference(dict2)
      # Decode fields of binary/composed types
      decodeBinary(kind, dict2)
      decodeEmbedded(kind, dict2)
      # Place the entity fields at the proper node of the JSON tree
      key = entity.key()
      dest_dict = get_dest_dict(key, jsonTree)
      dest_dict.update(dict2)
    raw.close()
    outFilePath = os.path.join(destDir, filename + '.json')
    out =, 'w', encoding='utf8')
    out.write(json.dumps(jsonTree, default=JsonSerializeFunc, ensure_ascii=False, indent=2))
    out.close()
    print("JSON file written to: " + outFilePath)
    jsonTree = {}

Using the gradle-jnlp-plugin – part 2

This collection of articles describes the gradle-jnlp-plugin from Tobias Schulte, hosted at Github.

The plugin produces a webstart distribution for a JavaSE application as required by the webstart specification. It creates the jnlp file and compresses/signs the jar files. The directory created by the plugin may then be uploaded to your static web server or embedded into your war file.

Describe jar signing process

The following options are available within the jnlp extension and describe the certificate that will sign the jar files.

signJarParams A map of key/value pairs that describe the certificate that signs the jar files.
Optional. If omitted, or if the map is empty, no jar files are signed.
The plugin uses the Ant signjar task. Any of the options described at signjar documentation may be used as entry in this map. The relevant attributes are:
keystore Path of file that contains the keystore with the certificate.
storepass Password to decrypt the keystore file.
alias Alias that identifies the certificate within the keystore file.
keypass Password to decrypt the certificate within the keystore file.
Optional. If omitted, defaults to the value passed to storepass.
tsaurl URL of webservice that provides timestamp authority.
Optional. If omitted, the jar file signature will expire together with the certificate.
signJarAddedManifestEntries A map of key/value pairs to be added as properties to each manifest in jars distributed by the application.
Optional. Defaults to [ 'Codebase': '*', 'Permissions' : 'all-permissions', 'Application-Name': "${}" ].
signJarFilteredMetaInfFiles A regular expression that matches names of META-INF files (such as existing signature files) that are to be removed from jars distributed by the application.
Optional. Defaults to '(?:SIG-.*|.*[.](?:DSA|SF|RSA)|INDEX.LIST)'
signJarRemovedManifestEntries A regular expression that matches names of properties that are to be removed from each manifest in jars distributed by the application.
Optional. Defaults to '(?:Trusted-Only|Trusted-Library)'
signJarRemovedNamedManifestEntries A regular expression that matches names of attributes that are to be removed from each named (per-file) manifest entry in jars distributed by the application.
Optional. Defaults to '(?:.*-Digest)'.

The default values for signJarAddedManifestEntries and signJarRemovedManifestEntries are the most permissive (least secure) configuration available in the webstart specification. They allow the jar files to access the user’s system resources (like reading and writing local files) and allow reusing the jar files on other domains and in other applications.

These two permissions should be fine for most common webstart applications. Recent webstart environments require jar files, even dependencies, to declare the required permissions in order to validate successfully while loading the application. I would only change these values if sandboxing your application is really a requirement or if no one must be able to reuse any of your jar files. If so, see Preventing RIAs from Being Repurposed.

Further, I experienced some rare situations where invalid manifest entries from third party jar files were not accepted by the target webstart JVM. I had to remove these entries using signJarRemovedManifestEntries.

The default values for signJarFilteredMetaInfFiles and signJarRemovedNamedManifestEntries remove any existing signature from third party jar files. Such signatures (or self-signatures) may not be recognized by the target webstart JVM, preventing your application from launching. These default options conservatively replace any existing signature with a signature from your own certificate.

This should be fine if you own a valid certificate, unless licensing issues would prevent you from redistributing third party jar files using your own certificate.

You may provide the signing options within one line:

jnlp {
   signJarParams = [keystore: '../keystore.ks', alias: 'myalias', storepass: 'mystorepass']
}

Or as individual attributes:

jnlp {
   signJarParams.keystore = '../keystore.ks'
   signJarParams.alias = 'myalias'
   signJarParams.storepass = 'mystorepass'
}

Describe branding

A user will expect to recognize your application on several branding artifacts that identify your application. Typically, such artifacts are application name, splash screen and icons that appear on the OS’s taskbar/desktop/main menu.

Such artifacts are defined within the jnlp file. Instead of providing a large set of options for this purpose, the gradle-jnlp-plugin maintainer preferred to let you write your own XML snippet to be placed within the jnlp file. The XML elements are documented at Structure of the JNLP File – Information element.

The XML snippet is described as a groovy script, as documented at Processing XML – Creating XML. For simplicity, I present an example that covers the most relevant branding artifacts.

jnlp {
    withXml {
       information {
          /* Application name that is shown while downloading the application,
           * on webstart security dialogs, on OS's desktop icon and
           * OS's main menu entry. */
          title 'Application name'

          /* Company name and homepage that are shown while downloading the
           * application and on webstart security dialogs. */
          vendor 'Company name'
          homepage (href: '')

          /* Several text versions that explain the application purpose. */
          description (kind: 'one-line', 'One line description.')
          description (kind: 'short', 'More than one line description.')
          description (kind: 'tool-tip', 'Tooltip description.')

          /* Application icon, of several sizes. The target JVM will choose the
           * one that best fits for each situation. This example assumes
           * 3 versions of the same icon with sizes: 16x16, 32x32 and
           * 48x48 pixels. The icon is described by a href relative to the
           * JNLP file. If the jnlp is hosted at,
           * the first icon will be downloaded from
           * All icons should be png files. */
          icon (href: 'images/main-16.png')
          icon (href: 'images/main-32.png')
          icon (href: 'images/main-48.png')

          /* Image for application splash screen, presented while downloading
           * and verifying the jar files. */
          icon (href: 'images/splash.png', kind: 'splash')

          shortcut {
             /* Add shortcut with icon to OS's desktop. */
             desktop ()
             /* Add shortcut with icon to submenu inside the OS's main menu. */
             menu (submenu: 'submenu-name')
          }
       }
       /* Though the jar files declare permissions, these permissions
        * must be declared again in the JNLP file. */
       security {
          'all-permissions' ()
       }
    }
}

Using the gradle-jnlp-plugin – part 1

This collection of articles describes the gradle-jnlp-plugin from Tobias Schulte, hosted at Github.

The plugin produces a webstart distribution for a JavaSE application as required by the webstart specification. It creates the jnlp file and compresses/signs the jar files. The directory created by the plugin may then be uploaded to your static web server or embedded into your war file.

Applying the plug-in

The gradle-jnlp-plugin is hosted in JCenter and is registered in the Gradle Plug-in Portal. It is also recommended to apply the application plugin.

buildscript {
   repositories {
      jcenter()
   }
   dependencies {
      classpath 'de.gliderpilot.gradle.jnlp:gradle-jnlp-plugin:+'
   }
}
apply plugin: 'application'
apply plugin: 'de.gliderpilot.jnlp'

Relevant tasks

The gradle-jnlp-plugin provides three relevant tasks.

createWebstartDir Bundles the project as a webstart distribution. Creates the jnlp file and compresses/signs the jar files. The directory created by this task may be uploaded to your static web server or embedded into your war file.
webstartDistTar Bundles the webstart distribution within a .tar file.
webstartDistZip Bundles the webstart distribution within a .zip file.


A jnlp extension is available for the project in order to describe the webstart distribution. The default values are reasonable for most webstart requirements.

The gradle-jnlp-plugin uses the main class declared for the application plugin. For a production-ready application, you also need to declare a certificate to sign the jar files.

mainClassName = ''
jnlp {
   signJarParams.keystore = 'keystore.ks'
   signJarParams.alias = 'myalias'
   signJarParams.storepass = 'mystorepass'
}

Describe the distribution

The following options are available within the jnlp extension to describe the webstart distribution.

href File name and file extension of the jnlp file created.
Optional. Defaults to “launch.jnlp”
codebase Base URL where the application is hosted. All relative URLs specified in href attributes in the jnlp file are using this URL as a base.
Optional. When the application is distributed within a war file using some kind of jnlp servlet, the codebase should be “$$codebase”.
spec The webstart specification required by the distribution.
Optional. Defaults to “7.0”.
mainClassName Fully qualified name of the class containing the main method.
Optional. Defaults to project.mainClassName, as defined by the application plugin.

The available options are exactly the attribute names for the jnlp element within the jnlp file, as described by the jnlp syntax specification.

I recommend changing only the href option, if needed. Users may prefer reading something like application_name.jnlp instead of launch.jnlp. All other options work well with their default values and there is no relevant reason to change them.


jnlp {
   href 'myapplication.jnlp'
   codebase ''
   spec '7.0'
}

Describe the JVM

The following options are available within the jnlp extension to describe the minimal JVM requirements.

j2seParams A map of key/value pairs that describe the JVM required to execute the application.
Optional. Defaults to [version: current-JVM-version]
Possible key names are:
version An ordered list of version ranges to use. For example, “1.7+” means that your application requires Java 7 or higher.
href The URL denoting the supplier of this version of java, and where it may be downloaded from.
java-vm-args Indicates an additional set of standard and non-standard virtual machine arguments that the application would prefer the JNLP Client to use when launching Java.
initial-heap-size Indicates the initial size of the Java heap.
max-heap-size Indicates the maximum size of the Java heap.

The available keys are exactly the attribute names for the java (or j2se) element within the jnlp file, as described by the jnlp syntax specification.

You may provide the JVM options within one line:

jnlp {
   j2seParams = [version: '7.0+', 'max-heap-size': '256m']
}

Or as individual attributes, which I prefer for better readability:

jnlp {
   j2seParams.version = '7.0+'
   j2seParams.'max-heap-size' = '256m'
}

If your application crashes with OutOfMemoryError, I recommend changing only the j2seParams.'initial-heap-size' and j2seParams.'max-heap-size' options. As far as I understand this plugin, if you set any j2seParams option, you also need to set j2seParams.version manually.

Further customization options will be explained on my next article about the gradle-jnlp-plugin.

Choosing a Gradle Java Webstart/JNLP Plugin

While developing a JavaFX application distributed with Webstart, I had to choose a Gradle plugin in order to package the application. Unfortunately, Gradle does not support Webstart out of the box. After experiencing several available plugins, I decided for the gradle-jnlp-plugin from Tobias Schulte, hosted at Github.

His gradle-jnlp-plugin creates a webstart distribution for a JavaSE application as required by the webstart specification. It creates the jnlp file and compresses/signs the jar files. The directory created by the plugin may then be uploaded to your static web server or embedded into your war file.

I discovered that the gradle-jnlp-plugin is stable and functional. While simple, it supports nearly the complete webstart specification. The default configuration proved reasonable for most requirements. And the maintainer has been reacting to recent issues. That gave me confidence that the plugin won’t become abandoned, as happened to many other similar plugins.

Signing the jnlp file is not supported. Fortunately, this is rarely required by the recent webstart specification. And there is no support for a war distribution containing the jnlp servlet. I suppose the maintainer does not recommend this approach, since the recent webstart specification made it simpler to launch the application without the jnlp servlet.

Unfortunately, besides some examples, there is nearly no documentation. Having a good understanding of the webstart specification, and after reading the plugin’s source code, I found the gradle-jnlp-plugin quite intuitive. Therefore, I decided to write another post to help other people who are new to gradle or webstart.

On the whole, I am grateful to Tobias Schulte for spending his time making this useful plugin freely available to the community. It was of great help while developing my JavaFX webstart application.

Datanucleus fails for Google AppEngine on Netbeans 8

This article explains how to prevent the Datanucleus Enhancer failure when developing for Google AppEngine on Netbeans 8.

This article originally appeared on

The cause

The Google AppEngine plugin for Netbeans from Gaelyk does not work with Google AppEngine Java SDK 40 and later.

The workaround

While developing, use AppEngine Java SDK 37 (the latest version that worked for me on Netbeans). You may download it from the Maven Repository (see this link), as it is no longer available at the Google AppEngine site. When building the release version, compile against the latest AppEngine Java SDK.

Make sure that you compile against AppEngine Java SDK 37 and that the “Server” tab contains a server instance running on the same SDK, or the issues will get even worse.

The explanation

The latest (and very old) release of the AppEngine plugin, for Netbeans 7.4, was last updated by Gaelyk in December 2013. After you add the local server to the “Server” tab, regardless of whether Datanucleus enhancement is turned on or not, the AppEngine plugin copies a set of Datanucleus jars into your build directory, overwriting the correct ones, or adding jars you did not want if you did not need such dependencies. These jars do not work with the most recent Google AppEngine SDK or result in classloader conflicts.

There is no use trying to change the build-impl.xml or ant-deploy.xml scripts within your project. They contain an evidently incorrect condition that copies the wrong or unwanted Datanucleus jars. But the AppEngine plugin does not run these scripts; it runs a similar hard-coded script that is copied into a temporary directory each time you run your local server.

The Symptoms

On the Netbeans “Output” tab:

Buildfile: C:\Users\Daniel\AppData\Local\Temp\build_appenginepluginutils_runanttarget_Server.xml



     [copy] Copying 1 file to G:\cosmetopeia-release\Server\build\web\WEB-INF\lib

  [enhance] Encountered a problem: Unexpected exception
  [enhance] Please see the logs [C:\Users\Daniel\AppData\Local\Temp\enhance5567238914045160848.log] for further information.

On the log file:

java.lang.RuntimeException: Unexpected exception
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(
    at java.lang.reflect.Method.invoke(
    ... 2 more
Caused by: java.lang.NoSuchMethodError: org.datanucleus.plugin.PluginManager.<init>(Lorg/datanucleus/PersistenceConfiguration;Lorg/datanucleus/ClassLoaderResolver;)V
    at org.datanucleus.OMFContext.<init>(
    at org.datanucleus.enhancer.DataNucleusEnhancer.<init>(
    at org.datanucleus.enhancer.DataNucleusEnhancer.<init>(
    at org.datanucleus.enhancer.DataNucleusEnhancer.main(
    ... 7 more


Unit tests for Objectify entities and DAOs

This article explains how to write unit tests (with junit) for Objectify entities and DAOs.

Google Cloud Platform describes how to write unit tests for Google DataStore. A small adaptation enables the same solution for Objectify.

The documentation suggests creating a LocalServiceTestHelper to be initialized in a @Before/setUp() method and disposed in an @After/tearDown() method.

After calling helper.setUp(), initialize the Objectify framework by calling ObjectifyService.begin(). Store the returned Closeable so you can dispose Objectify at the end of your test.

import com.googlecode.objectify.ObjectifyService;
import static com.googlecode.objectify.ObjectifyService.ofy;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class EventDataStoreTest {

    private final LocalServiceTestHelper helper
            = new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());
    private com.googlecode.objectify.util.Closeable closeable;

    @Before
    public void setUp() {
        helper.setUp();
        closeable = ObjectifyService.begin();
    }

    @After
    public void tearDown() {
        closeable.close();
        helper.tearDown();
    }

    @Test
    public void testSaveCriterioBuscado() {
        ofy().save().entity(new EventEntity("a", "b", "c", "d")).now();
    }
}

Displaying PDF documents on Netbeans RCP

This article explains how to display PDF documents on Netbeans RCP, using JasperReports libraries.

This article appeared first at

The proposed solution uses a TopComponent instance in editor mode to display each PDF document. It assumes that there is a JasperPrint object that represents a page-oriented document that can be viewed, printed or exported to other formats. The TopComponent will use a JRViewer to render the PDF document. The JRViewer handles scrolling, zooming, paging and even printing and saving as PDF, Excel and other formats.

Required libraries

Your module (or some dependent module) needs to import the JasperReports and POI libraries. The following JARs worked for me and were downloaded from the Maven Central Repository: jasperreports-6.0.0.jar, commons-codec-1.5.jar, dom4j-1.6.1.jar, stax-api-1.0.1.jar, poi-ooxml-schemas-3.10-FINAL-20140208.jar, poi-scratchpad-3.10-FINAL-20140208.jar, poi-3.10-FINAL-20140208.jar.

Create an empty TopComponent

Create a TopComponent named “JasperViewerTopComponent”. Add no content to it. Change the annotations as shown below:

@TopComponent.Description(
    preferredID = "JasperViewerTopComponent",
    iconBase = "br/.../libs/jasper/JasperViewerTopComponent",
    persistenceType = TopComponent.PERSISTENCE_NEVER
)
@TopComponent.Registration(mode = "editor", openAtStartup = false)

Some comments about the annotations:

  • Remove the @ActionID, @ActionReference and @TopComponent.OpenActionRegistration annotations that are generated automatically by the IDE. These annotations would create a menu entry in the “Window” menu to open a singleton TopComponent, which means there would be only one TopComponent able to display one PDF file at a time.
  • Also remove @ConvertAsProperties annotation and edit @TopComponent.Description to set persistenceType to TopComponent.PERSISTENCE_NEVER. The original persistence type works only for singleton TopComponents.
  • Also remove @Messages annotation, as TopComponent title and tooltip will be set manually.
  • @TopComponent.Registration(mode = “editor”, openAtStartup = false). A TopComponent of mode “editor” is arranged centrally, surrounded by other non-editor TopComponents. This is usually the best alternative.

Change constructor

Replace the generated no-argument constructor with the following one:

public JasperViewerTopComponent(String title, JasperPrint jasperPrint) {
  setName(title);
  setToolTipText(title);
  this.jasperPrint = jasperPrint;
  setLayout(new BorderLayout());
  // Import net.sf.jasperreports.view.JRViewer.
  // It won't work with net.sf.jasperreports.swing.JRViewer.
  JRViewer viewer = new JRViewer(jasperPrint);
  JRSaveContributor[] contribs = viewer.getSaveContributors();
  // Remove the save-as options you don't want.
  // This example keeps only the PDF and Excel options.
  for (JRSaveContributor saveContributor : contribs) {
    if (!(saveContributor instanceof JRSingleSheetXlsSaveContributor
        || saveContributor instanceof JRPdfSaveContributor)) {
      viewer.removeSaveContributor(saveContributor);
    }
  }
  add(viewer, BorderLayout.CENTER);
}

Some comments about the constructor:

  • There are two classes named JRViewer. Make sure to use net.sf.jasperreports.view.JRViewer!
  • JRViewer will take care of scrolling, zooming, paging and even printing and saving as PDF, Excel and other formats.
  • It is simpler to create the JRViewer within the constructor instead of within the initComponents() method. As initComponents() is maintained by Matisse, you would have to use an attribute to pass the jasperPrint instance to JRViewer constructor.

Create utility method to open PDF files

public static void openReport(final String title, final JasperPrint jasperPrint) {
  SwingUtilities.invokeLater(new Runnable() {
    @Override
    public void run() {
      JasperViewerTopComponent tc = new JasperViewerTopComponent(title, jasperPrint);
      tc.open();
      tc.requestActive();
    }
  });
}

That’s it.
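To open a report from an action, fill a compiled report and hand the resulting JasperPrint to the utility method. A sketch, assuming a compiled invoices.jasper file and an open JDBC connection (both names are made up):

```java
Map<String, Object> params = new HashMap<>();
params.put("TITLE", "Invoices");
// JasperFillManager comes from the JasperReports library.
JasperPrint print = JasperFillManager.fillReport("invoices.jasper", params, connection);
JasperViewerTopComponent.openReport("Invoices", print);
```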

Update license header without a plugin

This article explains how to update the license header of your Java source files without any dedicated tool. The recipe works in Notepad++, Netbeans and Eclipse and does not require installing any additional plugin.

This article appeared first at

Option 1: Using Notepad++

Open Notepad++ and navigate to Search->Find in Files. Fill in the dialog as shown below.


Find what: \A(.*?^package){1}
Replace with: /\*\r\n \* (line 1)\r\n \* (line 2).\r\n \*/\r\npackage
Directory: The root directory of your source code.
Filters: *.java
Activate “Regular expression” and “. matches newline”.

When using non-English license text, use UTF-8 escaping for non-ASCII characters. For example, use “\xE3” instead of “ã”.

Pros: Does not require any IDE.
Cons: Does not work properly for non-English license headers. Unfortunately, Notepad++ does not allow setting the encoding for the affected files.
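The same regular expression can be exercised from plain Java before letting an editor loose on a whole source tree; a minimal sketch (the sample source string is made up):

```java
import java.util.regex.Matcher;

public class HeaderReplaceDemo {

    // Replaces everything from the start of the file up to (and including)
    // the first 'package' keyword at a line start with the new header.
    public static String replaceHeader(String source, String newHeader) {
        // (?s): '.' matches newlines; (?m): '^' matches at each line start.
        return source.replaceFirst("(?sm)\\A(.*?^package){1}",
                Matcher.quoteReplacement(newHeader));
    }

    public static void main(String[] args) {
        String source = "/* old header */\npackage com.example;\n\npublic class Foo {}\n";
        String header = "/*\n * (line 1)\n * (line 2).\n */\npackage";
        System.out.println(replaceHeader(source, header));
    }
}
```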

Option 2: Using Netbeans IDE

Open the java project with Netbeans IDE, select it and navigate to Edit->Replace in Projects.

Fill the dialog as shown below.


Containing text: (?s)\A(.*?^package){1}
Match: Regular expression
Replace with: /\*\r\n \* (line 1)\r\n \* (line 2).\r\n \*/\r\npackage
Scope: Your java project, or browse to the source root directory
File name patterns: *.java

Click Continue. A new window opens listing all affected files. Confirm.

Pros: Works better than Netbeans License Changer plugin. Supports non-English languages and respects your encoding.

Cons: Replaces at most 500 files at a time. For large projects, you need to repeat this recipe on each subdirectory.

Option 3: Using Eclipse IDE

Open the java project with Eclipse IDE, select it and navigate to Search->File.

Fill the dialog as shown below.


Containing text: (?s)\A(.*?^package){1}
Activate “Regular expression”
File name patterns: *.java

Click “Replace”. A new window opens listing all affected files, and a dialog asks for the new text: /\*\r\n \* (line 1)\r\n \* (line 2).\r\n \*/\r\npackage


Click “Preview” to see how the files are going to be changed, or click “OK”.

Pros: Supports non-English languages and respects your encoding. Updates a file only if its license header is not already correct.

Cons: None known yet.

Comparing your Oracle databases

This article suggests an approach to compare your development, test and production databases using SQL scripts executed from a Servlet.

This article appeared first on

Create a Servlet that produces the following SQL statements as a text report (replace **** with a proper value applicable to your database).


For security reasons, you may prefer to output them into the application log.

You may request these reports on each of your environments and compare them side by side, for example using Beyond Compare.
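The SQL itself will depend on what you want to compare; queries against Oracle's data dictionary views are the typical ingredients. A sketch (replace **** with your schema owner, as above; adapt the views to what you want to compare):

```sql
-- Object inventory per type
SELECT object_type, object_name, status
  FROM all_objects
 WHERE owner = '****'
 ORDER BY object_type, object_name;

-- Column definitions
SELECT table_name, column_name, data_type, data_length, nullable
  FROM all_tab_columns
 WHERE owner = '****'
 ORDER BY table_name, column_id;
```

The deterministic ORDER BY clauses matter: they keep each report stable, so a line-by-line comparison shows only real schema differences.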

Netbeans RCP: increasing JVM memory settings

This article explains how to increase memory limits for your Netbeans RCP application when using Ant to create a ZIP package or a Windows installer.

This article appeared originally at


Both the ZIP package and the Windows installer contain an executable that reads JVM and bootstrap settings from the etc/application_name.conf file. This file is copied from the launcher configuration template at netbeans-installation\harness\etc\app.conf. The default_options entry contains all command-line parameters passed to the executable; JVM parameters are prefixed with “-J”.

By default, this file has very tight memory settings (heap from 24 MB up to 64 MB):

default_options="--branding ${branding.token} -J-Xms24m -J-Xmx64m"

Solution 1: Edit app.conf

Open netbeans-installation\harness\etc\app.conf and increase the -J-Xms and -J-Xmx parameters. However, this may require administrator privileges if the Netbeans IDE was installed into a protected system directory. The change will affect all applications you build on that machine and will not get into your version control. You also need to remember to edit this file again on a fresh install.

Solution 2: Create your own app.conf

Create your own app.conf file within your application suite directory. Reference this file using the app.conf property within the nbproject/ file of your main application suite project.

# Main-Suite\nbproject\
app.conf=nbproject/custom.conf

My custom.conf looks like the original app.conf:

# Main-Suite\nbproject\custom.conf:
# ${HOME} will be replaced by user home directory according to platform
default_mac_userdir="${HOME}/Library/Application Support/${APPNAME}/${buildnumber}"

# options used by the launcher by default, can be overridden by explicit
# command line switches
default_options="--branding ${branding.token} -J-Xms128m -J-Xmx512m"

As the custom.conf resides inside the project configuration directory, it will be versioned by source control and used by every distribution build.