Better Spring management of Javascript resources with Bakehouse

Bakehouse is a new project I’ve put together in an attempt to help with the workflow of writing web-apps with Spring.

Spring itself does a bunch of stuff right.  Specifically, with the new @Profile support introduced in 3.1, it’s become very easy to generate a single WAR that can be distributed in all environments (Dev, QA, Production, etc).

This is great server-side.  However, as more and more code is moved to the client-side, different frameworks and practices are emerging that require pre-processing of resources.

For example, on a typical app you might:

  • Compile your CSS with LessCSS or SASS
  • Concatenate and minify your JS when moving into production
  • Use a javascript abstraction such as Typescript or Coffeescript that requires compilation
  • Use local versions of 3rd-party libs (e.g., jQuery) during development, but the CDN-hosted version in production

Typically, these tasks require different compilation options, depending on where you’re deploying to.   This is at odds with Spring’s goal of a single distributable.

Additionally, depending on your toolset, as you’re coding you’ll need to recompile your assets down to javascript to use them.  This adds friction to the development process.

That’s where Bakehouse comes in.

Bakehouse allows you to define the resources in your jsp, and provide different sets of processors depending on the active profile.

For example:

<%@ taglib prefix="bakehouse" uri="http://www.mangofactory.com/bakehouse" %>
<head>
    <bakehouse:resource src="angular.js" cdn="http://ajax.googleapis.com/ajax/libs/angularjs/1.0.3/angular.min.js"/>
    <bakehouse:resources configuration="javascript" type="text/javascript">
        <bakehouse:resource src="file1.js"/>
        <bakehouse:resource src="file2.js"/>
    </bakehouse:resources>
    <bakehouse:resources configuration="typescript" type="text/javascript">
        <bakehouse:resource src="typescript.ts"/>
    </bakehouse:resources>
</head>

This defines the resources that our page will use.  Next, wire up a configuration that tells the page how to process them.  The configuration itself can vary between profiles.

@Configuration
@Profile("Production")
public class ExampleBakehouseConfig implements BakehouseConfigProvider {

    @Override @Bean
    public BakehouseConfig build(BakehouseConfigBuilder builder) {
        return builder
            .process("javascript").serveAsSingleFile("AppCode.js")
            .process("typescript").with(new TypescriptProcessor("TypescriptCode.js"))
            .serveResourcesFromCdn()
            .build();
    }
}

Now, in Production, the following is generated:

<head>
    <script src='http://ajax.googleapis.com/ajax/libs/angularjs/1.0.3/angular.min.js' type='text/javascript'></script>
    <script src='/bakehouse-example/generated/AppCode.js' type='text/javascript'></script>
    <script src='/bakehouse-example/generated/TypescriptCode.js' type='text/javascript'></script>
</head>

However, perhaps in development, you’d rather use a local version of AngularJs (that hasn’t been minified), and you’d rather not have your javascript concatenated.

Simple – just define another @Profile:

@Configuration
@Profile("development")
public class ExampleBakehouseConfig implements BakehouseConfigProvider {

    @Override @Bean
    public BakehouseConfig build(BakehouseConfigBuilder builder) {
        return builder
            .process("typescript").with(new TypescriptProcessor("TypescriptCode.js"))
            .build();
    }
}

Now, the default processing is applied to javascript resources (which means CDNs are ignored), and only our Typescript is processed.

Bakehouse is intended to be extensible – making it easy to write your own processors.  Currently, there’s only support for Javascript concatenation and Typescript processing.  However, more is planned, and implementing your own is easy.

Reducing development friction

Bakehouse monitors the declared resources, and recompiles them whenever the underlying source changes.  This removes the need to drop out of Eclipse to recompile your LessCSS, Coffeescript or Typescript (for example).

The resulting files are cached, to ensure that they’re served speedily.
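The combination of change monitoring and caching can be sketched like this. This is an illustrative Java sketch only, not Bakehouse’s real implementation – the class name, the timestamp-based staleness check, and the pluggable compile function are all invented for the example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch only -- not Bakehouse's actual code.  The idea:
// cache compiled output keyed by the source's last-modified timestamp,
// so a resource is recompiled only when the underlying file changes.
public class CompiledResourceCache {

    private static class CacheEntry {
        final long lastModified;
        final String output;
        CacheEntry(long lastModified, String output) {
            this.lastModified = lastModified;
            this.output = output;
        }
    }

    private final Map<String, CacheEntry> cache = new ConcurrentHashMap<String, CacheEntry>();
    private final Function<String, String> compile; // e.g. Typescript -> Javascript
    private int compileCount = 0; // exposed in the sketch to show caching working

    public CompiledResourceCache(Function<String, String> compile) {
        this.compile = compile;
    }

    /** Serve the compiled form of 'source', recompiling only when 'lastModified' changes. */
    public String get(String path, String source, long lastModified) {
        CacheEntry entry = cache.get(path);
        if (entry == null || entry.lastModified != lastModified) {
            // first request, or the source changed: recompile and cache
            entry = new CacheEntry(lastModified, compile.apply(source));
            cache.put(path, entry);
            compileCount++;
        }
        return entry.output;
    }

    public int getCompileCount() {
        return compileCount;
    }
}
```

Repeated requests for an unchanged resource are served straight from the cache; bumping the timestamp triggers a single recompile.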

Getting started

Kicking things off is simple.  In your Spring config, just ensure that annotation config is enabled, and declare a BakehouseSupport bean:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:mvc="http://www.springframework.org/schema/mvc"
    xsi:schemaLocation="http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.1.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd">

    <context:annotation-config />
    <context:component-scan base-package="com.mangofactory.bakehouse.example" />
    <bean class="com.mangofactory.bakehouse.config.BakehouseSupport" />
</beans>

There’s a working example project available here.

Wanna see more?  Tell me!

Bakehouse is currently a POC project.  I’m using it internally on a project, but if you’re interested in seeing open source development continue, you’ll need to let me know.  Either star the project on Github, or leave a note on this issue.

Custom JSON views with Spring MVC and Jackson

The Problem

Spring MVC provides amazing out-of-the-box support for returning your domain model in JSON, using Jackson under the covers.

However, often you may find that you want to return different views of the data, depending on the method that is invoked.

For example, consider the following two methods:

List<Book> findBooks() {}
Book getBook(Long bookId) {}

It’s reasonable to say that from the findBooks method, you only want to return a summarized view of the books, and then allow users to fetch the full object graph for a specific book.

Unfortunately, SpringMVC doesn’t support this natively, as the type serialization is annotated directly on the entity class (Book).

Jackson supports this concept via its @JsonView annotation, but there’s no way to hook that into Spring methods.

Instead, you’d be forced to create different VOs for serialization, which is ultimately just annoying boilerplate code.
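To illustrate the boilerplate being avoided, here’s the kind of hand-written VO you’d otherwise end up maintaining (a hypothetical BookSummary class, not from any actual codebase):

```java
// Hypothetical boilerplate: a hand-written "summary" VO duplicating
// fields from Book purely to control what gets serialized.
// Multiply this by every view of every entity in your model...
public class BookSummary {

    private final Long id;
    private final String title;
    private final String author;

    public BookSummary(Long id, String title, String author) {
        this.id = id;
        this.title = title;
        this.author = author;
    }

    public Long getId() { return id; }
    public String getTitle() { return title; }
    public String getAuthor() { return author; }
}
```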

Introducing @ResponseView

@ResponseView is a new annotation that solves this issue, allowing you to register custom views on a per-method basis.

First up, let’s annotate our domain model with Jackson’s @JsonView annotation:

@Data
class Book extends BaseEntity
{
    @JsonView(SummaryView.class)
    private String title;
    @JsonView(SummaryView.class)
    private String author;
    private String review;

    public static interface SummaryView extends BaseView {}
}

@Data
public class BaseEntity
{
    @JsonView(BaseView.class)
    private Long id;    
}

public interface BaseView {}

Note that here we’ve defined a public static interface as a marker, and annotated the properties we want returned from our summary method.  (I’m using Lombok’s @Data annotation to keep this blog post short.  It’s not a requirement of this approach.)

Now, let’s define the annotation, and mark up our methods:

@Retention(RetentionPolicy.RUNTIME)
public @interface ResponseView {
    public Class<? extends BaseView> value();
}

@Controller
public class BookService
{
    @RequestMapping("/books")
    @ResponseView(SummaryView.class)
    public @ResponseBody List<Book> getBookSummaries() {}

    @RequestMapping("/books/{bookId}")
    public @ResponseBody Book getBook(@PathVariable("bookId") Long bookId) {}
}

This indicates that we want the response from getBookSummaries serialized using our SummaryView annotation.

Here’s the wiring to make it work:

/**
 * Decorator that detects a declared {@link ResponseView}, and 
 * injects support if required
 * @author martypitt
 *
 */
public class ViewInjectingReturnValueHandler implements
        HandlerMethodReturnValueHandler {

    private final HandlerMethodReturnValueHandler delegate;

    public ViewInjectingReturnValueHandler(HandlerMethodReturnValueHandler delegate)
    {
        this.delegate = delegate;
    }
    @Override
    public boolean supportsReturnType(MethodParameter returnType) {
        return delegate.supportsReturnType(returnType);
    }

    @Override
    public void handleReturnValue(Object returnValue,
            MethodParameter returnType, ModelAndViewContainer mavContainer,
            NativeWebRequest webRequest) throws Exception {

        Class<? extends BaseView> viewClass = getDeclaredViewClass(returnType);
        if (viewClass != null)
        {
            returnValue = wrapResult(returnValue,viewClass);    
        }

        delegate.handleReturnValue(returnValue, returnType, mavContainer, webRequest);
    }
    /**
     * Returns the view class declared on the method, if it exists.
     * Otherwise, returns null.
     * @param returnType
     * @return
     */
    private Class<? extends BaseView> getDeclaredViewClass(MethodParameter returnType) {
        ResponseView annotation = returnType.getMethodAnnotation(ResponseView.class);
        if (annotation != null)
        {
            return annotation.value();
        } else {
            return null;
        }
    }
    private Object wrapResult(Object result, Class<? extends BaseView> viewClass) {
        PojoView response = new PojoView(result, viewClass);
        return response;
    }
}
@Data
public class PojoView implements DataView {
    private final Object pojo;
    private final Class<? extends BaseView> view;

    @Override
    public boolean hasView() {
        return true;
    }
}
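PojoView implements a DataView interface that the post doesn’t include. Judging by how ViewAwareJsonMessageConverter uses it below, it presumably looks something like this (an inferred sketch, not the actual source; BaseView is repeated from earlier so the snippet stands alone):

```java
// Inferred sketch of the DataView interface -- the converter relies on
// exactly these two methods.  (BaseView is the marker interface defined
// earlier in the post; repeated here so the sketch is self-contained.)
interface BaseView {}

interface DataView {
    boolean hasView();
    Class<? extends BaseView> getView();
}
```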
/**
 * Adds support for Jackson's JsonView on methods
 * annotated with a {@link ResponseView} annotation
 * @author martypitt
 *
 */
public class ViewAwareJsonMessageConverter extends
        MappingJacksonHttpMessageConverter {

    public ViewAwareJsonMessageConverter()
    {
        super();
        setObjectMapper(JacksonConfiguration.newObjectMapper());
    }

    @Override
    protected void writeInternal(Object object, HttpOutputMessage outputMessage)
            throws IOException, HttpMessageNotWritableException {
        if (object instanceof DataView && ((DataView) object).hasView())
        {
            writeView((DataView) object, outputMessage);
        } else {
            super.writeInternal(object, outputMessage);
        }
    }
    protected void writeView(DataView view, HttpOutputMessage outputMessage)
            throws IOException, HttpMessageNotWritableException {
        JsonEncoding encoding = getJsonEncoding(outputMessage.getHeaders().getContentType());
        ObjectMapper mapper = getMapperForView(view.getView());
        JsonGenerator jsonGenerator =
                mapper.getJsonFactory().createJsonGenerator(outputMessage.getBody(), encoding);
        try {
            mapper.writeValue(jsonGenerator, view);
        }
        catch (IOException ex) {
            throw new HttpMessageNotWritableException("Could not write JSON: " + ex.getMessage(), ex);
        }
    }

    private ObjectMapper getMapperForView(Class<?> view) {
        ObjectMapper mapper = JacksonConfiguration.newObjectMapper();
        mapper.configure(SerializationConfig.Feature.DEFAULT_VIEW_INCLUSION, false);
        mapper.setSerializationConfig(mapper.getSerializationConfig().withView(view));
        return mapper;
    }

}

/**
 * Modifies Spring 3.1's internal return value handlers, wiring up a decorator
 * to add support for @JsonView
 * 
 * @author martypitt
 *
 */
@Slf4j
public class JsonViewSupportFactoryBean implements InitializingBean {

    @Autowired
    private RequestMappingHandlerAdapter adapter;
    @Override
    public void afterPropertiesSet() throws Exception {
        HandlerMethodReturnValueHandlerComposite returnValueHandlers = adapter.getReturnValueHandlers();
        List<HandlerMethodReturnValueHandler> handlers = Lists.newArrayList(returnValueHandlers.getHandlers());
        decorateHandlers(handlers);
        adapter.setReturnValueHandlers(handlers);
    }
    private void decorateHandlers(List<HandlerMethodReturnValueHandler> handlers) {
        for (HandlerMethodReturnValueHandler handler : handlers) {
            if (handler instanceof RequestResponseBodyMethodProcessor)
            {
                ViewInjectingReturnValueHandler decorator = new ViewInjectingReturnValueHandler(handler);
                int index = handlers.indexOf(handler);
                handlers.set(index, decorator);
                log.info("JsonView decorator support wired up");
                break;
            }
        }        
    }

}

Those two classes are responsible for the heavy lifting.  ViewInjectingReturnValueHandler identifies methods with our @ResponseView annotation, and wraps their return values so they can be serialized correctly.

JsonViewSupportFactoryBean modifies Spring’s internal wiring, wrapping the existing RequestResponseBodyMethodProcessor with our decorator.

From here, it’s just a matter of updating the Spring configuration:

    <mvc:annotation-driven>
        <mvc:message-converters>
            <bean class="com.mangofactory.concorde.api.ViewAwareJsonMessageConverter" />
        </mvc:message-converters>
    </mvc:annotation-driven>

That’s it.

Now, calls to any methods annotated with @ResponseView will have the appropriate json view rendered.
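To make the effect concrete, here’s the kind of output you’d now expect from the two endpoints – illustrative values only (the summary view serializes just the @JsonView-annotated properties; the unannotated review field only appears in the full view):

```
GET /books  (rendered with SummaryView):
[ { "id" : 1, "title" : "Effective Java", "author" : "Joshua Bloch" } ]

GET /books/1  (no view applied – full object):
{ "id" : 1, "title" : "Effective Java", "author" : "Joshua Bloch", "review" : "A must-read." }
```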

Update: Example available on Github

There’s a working example of this implementation now available on github, here.

Final thought:  Interfaces vs Classes

I’d recommend using interfaces for declaring the view markers, as opposed to classes as shown elsewhere.  Interfaces allow you to build fairly fine-grained views using multiple inheritance.  As you never have to worry about the implementation details, there’s none of the usual multiple-inheritance nasties lurking.
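For example, interface markers compose freely – the view names here are hypothetical:

```java
// Hypothetical fine-grained views composed via interface inheritance.
// Because the markers carry no implementation, extending several of
// them at once is perfectly safe.
interface IdView {}
interface SummaryFields extends IdView {}
interface PricingFields extends IdView {}

// A staff view that sees both the summary and the pricing fields:
interface StaffView extends SummaryFields, PricingFields {}
```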


Spring Data Cross-Store support without AspectJ

Spring Data provides great Cross-store support between JPA and MongoDB.  However, one of the requirements is that you use AspectJ to compile your project.

I’ve been using Project Lombok heavily recently, which unfortunately is incompatible with AspectJ.

So, I’ve added a small project up on GitHub which provides Cross-Store support without the requirement of AspectJ.

It should be noted that the Spring Data version is better, so if you can use it – please do so.

There’s a getting started page over on Github that describes how to get going.

Please, take it for a spin, and let me know how you get on!


A new Parsley Extension: The DynamicServices tag

I just pushed a new Parsley extension project to GitHub – support for the <DynamicService /> tag.

Typically in a project, I like to structure my services layers pretty cleanly – for each service I would declare the following:

  • An interface which defines the services contract
  • A concrete implementation of the interface, which abstracts away the RemoteObject
  • A stub implementation of the interface, for testing when the services layer is unavailable

For example, a simple EchoService might look a little something like this:

// IEchoDelegate.as
public interface IEchoDelegate
{
     function echoMessage(source:String):AsyncToken;
}

// EchoDelegate.as
public class EchoDelegate implements IEchoDelegate
{
   [Inject]
   public var service:RemoteObject;

   public function echoMessage(source:String):AsyncToken
   {
      return service.echoMessage(source);
   }
}

Here, you can see the concrete delegate is really just boiler-plate code.  It’s pretty repetitive, and while I find lots of value in the pattern, the actual implementation can be a bore.

So, I put together a Parsley extension to generate these delegates on the fly.

Here’s an example:

<?xml version="1.0" encoding="utf-8"?>
<parsley:Objects>
   <fx:Declarations>
      <services:DynamicService type="{IEchoService}" endpoint="http://localhost:8080/testdrive/messagebroker/amf" destination="echoService" />
   </fx:Declarations>
</parsley:Objects>

This definition combines the interface and the remote object definition in the context. An implementation of the interface is generated on-the-fly at runtime, and is available to be injected into classes as required.

Eg:

// EchoCommand.as
	public class EchoCommand
	{
		[Inject]
		public var service:IEchoService;

		public function execute(message:EchoMessage):AsyncToken
		{
			return service.echo(message.source);
		}

		public function result(result:String):void
		{
			trace("Received from the server:" + result);
		}
	}
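For readers more at home in the Java posts on this blog: the same trick – synthesizing an implementation of an interface at runtime and routing every call through one generic handler – is what java.lang.reflect.Proxy does. A hedged sketch of the idea (this is not how the Parsley extension works internally; the handler here just echoes its input in place of a real remote call):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Sketch: generate an implementation of a service interface at runtime,
// dispatching every method call to a single generic handler.  The handler
// returns the method name and first argument, standing in for a real
// RemoteObject dispatch.
public class DynamicDelegateSketch {

    public interface EchoService {
        String echoMessage(String source);
    }

    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> serviceInterface) {
        InvocationHandler handler = (proxy, method, args) ->
                method.getName() + "(" + args[0] + ")"; // stand-in for a remote call
        return (T) Proxy.newProxyInstance(serviceInterface.getClassLoader(),
                new Class<?>[] { serviceInterface }, handler);
    }

    public static void main(String[] args) {
        EchoService svc = create(EchoService.class);
        System.out.println(svc.echoMessage("hi")); // echoMessage(hi)
    }
}
```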

Source

The source for this is available now on github.  There’s also a demo project available here.

A word of warning

Under the covers, this extension makes use of the ASCommons-Bytecode library to build an implementation of the interface.

Unfortunately, this dynamic goodness is not free, and incurs a one-off cost at startup while the bytecode library parses the bytecode of the swf into memory.  As a result, you’ll notice that Parsley takes a little longer to initialize than you might be used to.  This is a tradeoff that must be considered before using this approach.


Returning to Australia

A short personal post in a blog otherwise filled with Flex & dpHibernate geekery.

After several years of working abroad in both USA and London, I’m returning to Australia, touching down in June 2011.

I’ll be looking for new consultancies to partner with, and new projects to work on.  If you’re at all interested, please drop me a line.

For reference, my current CV (That’s résumé for you North American folk) is available here

I look forward to hearing from you!

Flapper: An extension library for AOP with Parsley

I’ve just thrown a quick project up on Google Code for allowing simple AOP style programming with Parsley:

Flapper

The project aims to allow simple registration of AOP interceptors by registering a metatag with an interceptor.

It uses the awesome as3-commons bytecode library for proxy generation.

Configuration

The aim is to have simple configuration.  You activate Flapper by simply declaring AspectSupport in your Parsley context:

        <parsley:ContextBuilder>
            <parsley:FlexConfig type="{ParsleyConfig}" />
            <tag:AspectSupport>
                <tag:MetatagAspectDefinition metatag="Log" aspect="{new LoggingAspect()}" />
            </tag:AspectSupport>
        </parsley:ContextBuilder>

In this example, I’m registering the Log metatag with my LoggingAspect.

Now, any class which has a method annotated with Log will be proxied, and the LoggingAspect will get invoked before the actual method itself.

Eg:

    public class Calculator
    {
        [Log]
        public function add(a:int,b:int):int
        {
            return a + b;
        }

    }

The aspects themselves are also extremely simple, implementing an Aspect interface.

Here’s my LoggingAspect:


    public class LoggingAspect implements Aspect
    {
        public function intercept(pointCut:ProceedingJoinPoint):void
        {
            trace(pointCut.targetMember.localName + " called with params: " + pointCut.arguments.join(","));
            pointCut.proceed();
            var returnValueString:String = pointCut.returnValue ? pointCut.returnValue.toString() : "null";
            trace(pointCut.targetMember.localName + " returned " + returnValueString );
        }
    }

This simply traces out the method name that was called, and the arguments that were passed in.

Usage

That’s all the configuration that’s required.  Now, any time I call calculator.add(), the aspect will trace out the call to the console window.

Note – classes that will be proxied must be declared using Parsley’s Object tag, rather than as an instance.

Eg:

    <fx:Declarations>
        <!-- Don't do this -- it won't work -->
        <sample:Calculator />

        <!-- Instead, declare using the Object tag -->
        <parsley:Object type="{Calculator}" />
    </fx:Declarations>

Usage from here is no different than with any other injected class.

Example:

            [Inject]
            public var calculator:Calculator;

            private function addNumbers():void
            {
             // Results in a message being traced out: "add called with params: 2,3"
                calculator.add(2,3)
            }

The project and a demo are available to be checked out here. It’s still fairly early days, and the project hasn’t really been tested – but feel free to grab it and have a play.

Important Credit

This whole project took 2 nights to write.  The main reason it was so quick and simple is that all the hard work had already been done by two awesome existing projects.

The as3-commons team have done an amazing job on the project, but also in providing thorough clear documentation.

Equally so, Jens Halm has done an outstanding job on Parsley, making the framework so easy to extend, with a solid API, and killer docs.  Not to mention he’s extremely responsive in the forums.  Both these projects rock.

Update:

I’ve updated this blog post to reflect the move to the Aspect interface, where previously I was using the ascommons-bytecode IInterceptor.  This has made chaining Aspects easier.  Note – I’m still using ascommons-bytecode underneath, but it’s now abstracted away a little more.


Simpler Pre/Post Update wiring in Spring & dpHibernate using @Component

dpHibernate 2.0-RC1 was posted a few weeks back.

One of the features this offers is better “official” support for Spring 2.5.x and Spring 3.0, through extension projects.

In doing this, dpHibernate is now significantly more extensible, by having all the various dependencies wired in at runtime, allowing developers to wire in their own components if required.

However, the current implementation requires a lot of verbose boilerplate wiring to get dpHibernate up and running. This example illustrates my point nicely.

In the future, I plan on building much better support for sensible defaults in the Spring configuration.

Tonight I committed the first step in this on the 2.0 branch – adding support for auto-detecting pre/post update interceptors.

Now, all the wiring required for adding pre/post update interceptors is handled using Spring’s existing @Component annotation. Eg:

Before:

       <bean id="objectChangeUpdater"
                class="org.dphibernate.persistence.state.AuthenticatedObjectChangeUpdater"
                scope="prototype">
                <property name="preProcessors" ref="dpHibernatePreProcessors" />
                <property name="postProcessors" ref="dpHibernatePostProcessors" />
        </bean>

        <!--  Used in update process, for resolving proxies back to the entity -->
        <bean id="hibernateProxyResolver" class="org.dphibernate.persistence.state.DbProxyResolver"
                scope="prototype">
                <constructor-arg ref="sessionFactory" />
        </bean>

        <!--  Optional.  Pre processors are invoked before an update operation.  Must implement IChangeMessageInterceptor -->
        <util:list id="dpHibernatePreProcessors">
                <ref bean="uniqueUsernameInterceptor" />
        </util:list>

        <!-- Optional.  Post processors are invoked after an update operation.  Must implement IChangeMessageInterceptor -->
        <util:list id="dpHibernatePostProcessors">
                <ref bean="passwordEncryptionInterceptor" />
        </util:list>
        <!-- An example of a customized message interceptor.
        Checks to see if a username is unique in the database before performing a Create or Update on the ApplicationUser -->
        <bean id="uniqueUsernameInterceptor"
                class="com.mangofactory.pepper.service.interceptors.UsernameExistsChangeMessageInterceptor"
                autowire="constructor" />

After:

All the above XML is replaced by simply adding the @Component declaration to the top of your interceptors:

@Component
public class UsernameExistsChangeMessageInterceptor implements
        IPreUpdateInterceptor

…etc.

Notes:

  • You can still declare the interceptor beans themselves in the XML config if you prefer.  These will be detected along with any interceptors annotated with the @Component annotation.
  • You can override this behaviour by explicitly wiring in the preProcessors and postProcessors if you choose.  The autowired values are only used if the value has not been explicitly set.
  • In order to be eligible for autowiring, your interceptors must implement either IPreUpdateInterceptor or IPostUpdateInterceptor.  These are two new interfaces, both which subclass IChangeMessageInterceptor.  They don’t add any new methods, merely serve as a marker interface to indicate when in the lifecycle of the update they should be used.
  • For any of this to work, you need to be using the <context:component-scan  /> declaration in your spring context.  (See Section 3.12.2 in the spring docs here.)
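Based on the description above, the two new marker interfaces presumably look something like this (a sketch; IChangeMessageInterceptor is dpHibernate’s existing interceptor interface, and its methods are omitted here):

```java
// Sketch of the two new marker interfaces.  They declare no methods of
// their own -- they only indicate when in the update lifecycle an
// interceptor should run.
interface IChangeMessageInterceptor { /* existing methods omitted */ }

interface IPreUpdateInterceptor extends IChangeMessageInterceptor {}
interface IPostUpdateInterceptor extends IChangeMessageInterceptor {}
```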

Annotations in dpHibernate

dpHibernate now has support for annotations to provide fine grained control over how properties / objects are serialized when sending them to the client.

There are three annotations available – I’ll discuss them here:

@NeverSerialize

This does pretty much what it says on the tin.  A property or class annotated with @NeverSerialize will never be serialized by dpHibernate when sending from Java -> Flex.

This is the server-side equivalent of the [Transient] metatag in Flex.

For example:

public class ApplicationUser {

    private String password;

    @NeverSerialize
    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }
}

In this example, the password property is never serialized and sent to the client. Note – this doesn’t affect the deserialization behaviour when sending from Flex -> Java — which is handy when it comes to dealing with entity persistence.  You can set the password property on the client and send it to the server, and property updates are handled fine.  However, the value is never sent back to the client.

@AggressivelyProxy

The default behaviour and intent of dpHibernate is to handle lazily loaded properties from Hibernate.  This makes sending large collections of entities across to the client extremely efficient. However, sometimes at serialization time the collection / property has been fully hydrated by Hibernate – so there’s no lazy-loading going on.  In these situations, the default behaviour of dpHibernate is to send the full object (property / collection).  However, this can be expensive, and may not be necessary.

@AggressivelyProxy tells the serializer to proxy the target, even if there’s no lazy-loading involved.

For example, let’s consider an imaginary facebook clone, and the piece of code that handles populating the news feed for a given user.  The returned list of NewsFeedEntries is actually populated from various different feeds, and aggregated together. Eg:

NewsFeedService.java

    public NewsFeed getNewsFeed(ApplicationUser user)
    {
        NewsFeed newsFeed = new NewsFeed();
        newsFeed.addNewsFeedEntries(friendFeed.getTimelineEntries());
        newsFeed.addNewsFeedEntries(eventsFeed.getTimelineEntries());
        newsFeed.addNewsFeedEntries(myFeed.getTimelineEntries());
        return newsFeed;
    }

NewsFeed.java


public class NewsFeed {

    private List<NewsFeedEntry> newsFeedEntries = new ArrayList<NewsFeedEntry>();

    public void setNewsFeedEntries(List<NewsFeedEntry> timelineEntries) {
        this.newsFeedEntries = timelineEntries;
    }
    @AggressivelyProxy
    public List<NewsFeedEntry> getNewsFeedEntries() {
        return newsFeedEntries;
    }
    public void addNewsFeedEntries(List<NewsFeedEntry> entries)
    {
        newsFeedEntries.addAll(entries);
    }
}

Here, the newsFeedEntries is likely to have many many entities inside it.  However – given that it’s an aggregate of entities from other services, it’s probably not a lazy loaded collection from Hibernate.

The default dpHibernate behaviour is to serialize all the entities inside newsFeedEntries.  By adding the @AggressivelyProxy annotation, dpHibernate will proxy the collection, and the entities will be loaded from the client when they’re needed, giving a significant performance gain.

@EagerlySerialize

Overrides the default behaviour of the serializer, always serializing the value (ie., never sending a proxy).

The default behaviour of dpHibernate is to send a proxy for as many complex properties as it can, allowing the client to request the data as/when it’s needed.  This behaviour is often more efficient (minimizing the payload, and deserialization time on the client).

However, if the client is likely to always need a certain property, this can result in unnecessary trips being made to the server – adding unwanted latency.  By using the @EagerlySerialize annotation, dpHibernate will always send the real value of a property, and never a proxy in its place.

For example, continuing the example of the facebook clone discussed above – let’s assume that whenever we send a list of NewsFeedEntry objects to the client, that the client will always want to know the user the news feed item is about.

The default behaviour of dpHibernate is to proxy this property, which would result in an immediate second call to the server.  Instead, by annotating with @EagerlySerialize, the property is not proxied, and the real value is sent to the client:


public class NewsFeedEntry {

    private ApplicationUser entryUser;

    @EagerlySerialize
    public ApplicationUser getEntryUser() {
        return entryUser;
    }

    public void setEntryUser(ApplicationUser entryUser) {
        this.entryUser = entryUser;
    }

}


Batch Loading Proxies in dpHibernate

I want to share a new feature of dpHibernate I’ve just committed to the codebase – batch loading.

The default behaviour of dpHibernate in the Flex Client is to lazy load any property that is required, as soon as it is first asked for.

This is all well and good, but in some cases, can generate a very high load on your webserver, and in turn – your database.

Take for example the following screenshot of an upcoming dpHibernate demo project:

For every question shown, there’s a list of associated tags.  This list is lazy loaded, so dpHibernate will fetch it as soon as it’s requested.  However, in the above screenshot, that generates a server request for each of the 21 displayed tags.  Hardly efficient.

dpHibernate now supports batching of requests for proxies.  It works as follows:

  • When a request is made to load a proxy from the flex client, dpHibernate postpones sending the request for a short time (the default is 50 milliseconds)
  • If another request is received during this delay, we extend the delay another 50 milliseconds, up to a maximum delay threshold (the default is 350 milliseconds)
  • Once either no more requests are made, or the maximum delay threshold is exceeded, a single server request is sent to load the entire batch
  • On the server, make a single trip to the database to load all the entities
  • Return the loaded entities to the client

So, where previously this would’ve taken 21 server calls & 21 database trips, we’ve reduced it down to 1.  Not bad, eh?
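The buffering rules above can be sketched as a debounce with a cap. This is an illustrative Java sketch only, not dpHibernate’s actual implementation (which is ActionScript); times are passed in explicitly to keep the sketch deterministic, and the defaults are the ones quoted above:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the client-side buffering rules: each new proxy-load request
// extends a short flush delay, capped at a maximum measured from the
// first request.  Once the deadline passes, the whole batch is flushed
// as a single server call.
public class LoadRequestBuffer {

    static final long EXTEND_MS = 50;     // default delay added per request
    static final long MAX_DELAY_MS = 350; // default maximum delay threshold

    private final List<String> pending = new ArrayList<String>();
    private long firstRequestAt;
    private long flushAt;

    /** Queue a proxy-load request made at time 'now' (millis). */
    public void request(String proxyId, long now) {
        if (pending.isEmpty()) {
            firstRequestAt = now;
        }
        pending.add(proxyId);
        // extend the deadline, but never beyond the cap
        flushAt = Math.min(now + EXTEND_MS, firstRequestAt + MAX_DELAY_MS);
    }

    /** Returns the batch to send in one server call, or an empty list if not yet due. */
    public List<String> flushIfDue(long now) {
        if (pending.isEmpty() || now < flushAt) {
            return Collections.emptyList();
        }
        List<String> batch = new ArrayList<String>(pending);
        pending.clear();
        return batch;
    }
}
```

So two requests arriving 30ms apart push the flush deadline out to 80ms, and both go to the server together in one call.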

Configuration

This option is disabled by default.

To enable it, set the operationBufferFactory property of the HibernateRemoteObject to an instance of a LoadDPProxyOperationBufferFactory.  (Terrible names, I know, but naming is hard.)  Here’s an example:

<?xml version="1.0" encoding="utf-8"?>
<fx:Group xmlns:fx="http://ns.adobe.com/mxml/2009"
           xmlns:s="library://ns.adobe.com/flex/spark"
           xmlns:mx="library://ns.adobe.com/flex/mx" xmlns:digitalprimates="http://www.digitalprimates.net/2007/mxml">
    <fx:Declarations>
        <digitalprimates:HibernateRemoteObject id="dataAccessService" destination="dataAccessService" operationBufferFactory="{bufferFactory}"  />
    </fx:Declarations>
    <fx:Script>
        <![CDATA[
            import net.digitalprimates.persistence.hibernate.rpc.LoadDPProxyOperationBufferFactory;
            [Bindable]
            public var bufferFactory:LoadDPProxyOperationBufferFactory = new LoadDPProxyOperationBufferFactory();
        ]]>
    </fx:Script>
</fx:Group>

On the server, you need something to handle the load requests.  This isn’t trivial, and historically we’ve left writing server-side implementations up to individual projects.  However, there’s now a default DataAccessService which provides support for all dpHibernate features out-of-the-box.  I’ll write more about this later, but if you can’t wait, head over and grab the latest source, or browse the class directly.  (Note, there’s an implementation geared towards Spring environments too here).
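As a rough illustration of what such a handler needs to do, the sketch below groups the incoming proxy requests by entity class and builds one query per class, so the whole batch costs one database trip per entity type rather than one per proxy. All names here are hypothetical; this is not the actual DataAccessService implementation:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of servicing a batched proxy-load request.
public class BatchLoadPlanner {

    // Group requested (entityClass, id) pairs so that each entity class
    // can be satisfied with a single query.
    public static Map<String, List<String>> groupByClass(List<String[]> requests) {
        Map<String, List<String>> grouped = new LinkedHashMap<>();
        for (String[] req : requests) {
            grouped.computeIfAbsent(req[0], k -> new ArrayList<>()).add(req[1]);
        }
        return grouped;
    }

    // Build one HQL-style statement per entity class instead of one per proxy.
    public static String toQuery(String entityClass, List<String> ids) {
        return "from " + entityClass + " e where e.id in ("
                + String.join(", ", ids) + ")";
    }
}
```

For the 21-tag screenshot above, this shape of handler would collapse the batch into a single `in (...)` query against the Tag table.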


Entity Persistence with dpHibernate – Overview

dpHibernate is an open source project that facilitates lazy loading of hibernate entities with BlazeDS and Flex.

Note:  If you’re not familiar with the dpHibernate project, it may help to have a read of this overview post

The topic discussed here is currently only available on the changeMessaging branch of dpHibernate.  The code is available here:

http://dphibernate.googlecode.com/svn/branches/changemessaging

Recently I committed some changes to a new branch of the source which now allows persistence operations (Create, Update, Delete) on entities, using a few lines of code.

Eg:

public class SomePM {
    public function doSomeWork(author:Author):void {
        author.name = "Josh Bloch";
        author.age = 30;
        author.publisher = publishers.findByName("AddisonLee");
        author.save();
    }
    public function createNewAuthor():void {
        var newAuthor:Author = new Author();
        newAuthor.name = "Sondhiem";
        newAuthor.age = 55;
        newAuthor.save();
    }
    public function deleteAuthor(author:Author):void
    {
        author.deleteRecord();
    }
}

This is similar to the entity persistence provided by LCDS, but without the exorbitant licensing fee!

The save() and deleteRecord() methods are made available through the new IUpdatableHibernateProxy interface. Generally, these would delegate work back to the HibernateUpdater class. Examples of this implementation are shown later.

This approach saves you having to write individual update services for each entity (which often requires several methods per entity. Painful!)

Eg:

// Actionscript:
[Managed]
public class Author extends BaseEntity {
    public var id : int;
    public var name : String;
    public var age : int;
}
public class BaseEntity extends HibernateBean {
    public function save(responder:IResponder = null) : AsyncToken
    {
        return HibernateUpdater.save(this,responder);
    }
    public function deleteRecord(responder:IResponder=null) : AsyncToken
    {
        return HibernateUpdater.deleteRecord(this, responder);
    }
}

From this, Author and all other subclasses of BaseEntity inherit a save() method (which handles both creation and updating of entities) and a deleteRecord() method.

Note that, should you wish to keep persistence methods out of your entity classes, the following is also valid:

[Managed]
public class Author implements IHibernateProxy {
    public var id : int;
    public var name : String;
    public var age : int;
    // IHibernateProxy impl. Excluded...
}

public class SomePM {
    public function updateAuthor(author:Author):void
    {
        HibernateUpdater.update(author);
    }
    // Or, for that matter...
    public function updateAnyEntity(entity:IHibernateProxy):void
    {
        HibernateUpdater.update(entity);
    }
}

Or, if you prefer a more dependency injection friendly implementation:

public class SomePM {
    [Inject] // Actual injection depends on your DI framework of choice.
    public var hibernateUpdater:IHibernateUpdater;

    public function updateAuthor(author:Author):void
    {
        hibernateUpdater.update(author);
    }
}

(Note, this would also work within the Author class itself, should you prefer)

The possibilities for implementation are pretty open; dpHibernate isn’t overly prescriptive. The only requirements on your entity classes to facilitate persistence are that they are tagged [Managed] (or implement IManaged), and implement IHibernateProxy.

Nested properties and collections

Both nested properties and changes to collections are supported.

Eg:

Nested properties

public class SomePM {
    public function doSomeWork(book:Book):void
    {
        book.author.name = "newName";
        book.save();
    }
}

Collections

public class SomePM {
    public function doSomeWork(author:Author,book:Book):void
    {
        author.books.addItem(book);
        author.save();
    }
}

A combination of both

public class SomePM {
    public function doSomeWork(author:Author,book:Book):void
    {
        author.books.addItem(book);
        book.title = "This is the new title";
        author.save(); // Also persists the change to book.title,
        // as it's part of the author.books collection.
    }
}

Efficient .save()’s

When calling save() on an entity, the actual entity itself is not passed across to the update service. For complex entity trees with collections and nested hierarchies, serialization within the Flash player can be painfully slow.

Eg, consider the following common example:

[Managed]
public class Author extends BaseEntity {
    public var id : int;
    public var name : String;
    public var age : int;
    public var books : ArrayCollection;
}
[Managed]
public class Book extends BaseEntity {
    public var title : String;
    public var author : Author;
}

The parent entity, Author, contains a list of child entities, books, each of which contains a reference back to its parent. These circular references are commonplace, and required in many scenarios by Hibernate.

If we were to use standard AMF serialization for this, the nested tree can get very deep very quickly, depending on the number of Books an Author has written. Additionally, this cost is completely unnecessary if the only thing changed on the Author is its name.

Also, given that dpHibernate facilitates lazy loading, it’s quite possible that the property Author.books hasn’t even been populated yet at the time of serialization. Standard AMF serialization would trigger this loading unnecessarily.

Instead, only a list of changes that have occurred to the entity are passed along. This keeps the serialization process light, and quick. Nested properties & collections are handled, and circular references like the one described above are handled efficiently, ensuring that each entity is processed only once.

Essentially, instead of sending through the entire updated entity, dpHibernate sends a collection of changes through to be processed by the server, i.e.:

  • Author record 123 : name changed from “Josh bloch” to “Josh Bloch”
  • Author record 123 : books collection now contains :
    • Book record 123
    • Book record 456
    • A new book record, tentatively identified as ABC
  • Book record 456 : title changed from “effective .net” to “Effective Java”
  • Book record ABC created.

These records are simple objects of primitive types, so they serialize very quickly. Once on the server, they are ordered to ensure that dependencies are processed in the appropriate order, i.e. the change record for the creation of Book ABC is processed before it is added to the books collection of Author 123.
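The change records above could be sketched as something like the following Java shape. The class, enum, and field names here are purely illustrative, not dpHibernate's actual wire format; the point is that the records are flat primitives, and that creates sort ahead of the changes that reference them:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical shape of a change record; not dpHibernate's real format.
public class ChangeRecord {
    public enum Kind { CREATE, PROPERTY_CHANGE, COLLECTION_CHANGE }

    public final Kind kind;
    public final String entity;   // e.g. "Book"
    public final String id;       // real id, or a tentative client id like "ABC"
    public final String property; // null for CREATE records
    public final Object newValue; // primitives only, so serialization stays cheap

    public ChangeRecord(Kind kind, String entity, String id,
                        String property, Object newValue) {
        this.kind = kind;
        this.entity = entity;
        this.id = id;
        this.property = property;
        this.newValue = newValue;
    }

    // Creates must run before the changes that reference the created entity;
    // the sort is stable, so the relative order of other records is preserved.
    public static List<ChangeRecord> ordered(List<ChangeRecord> records) {
        List<ChangeRecord> sorted = new ArrayList<>(records);
        sorted.sort(Comparator.comparingInt(r -> r.kind == Kind.CREATE ? 0 : 1));
        return sorted;
    }
}
```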

Considerations

The entity persistence code of dpHibernate is still new, so typical warnings about bugs apply. That said, dpHibernate is currently being used successfully within a large scale commercial project.

Also, currently these changes to dpHibernate require that your project is using Spring on the Java tier. This dependency will be removed shortly, however for this reason, the entity persistence code is kept separate from the core dpHibernate trunk.

Finally, there are performance implications to be aware of. However, these can easily be mitigated through the use of new custom metatags, and by avoiding state tracking on all entities by default. Fine-tuning dpHibernate for performance will be discussed in a later blog post.
