Dynamic Spring Security Sample

Introduction

We saw in the previous article Dynamically Securing Method Execution with Spring Security how the Spring Security ACL module can be exploited to dynamically secure method access. Here we detail this solution with a working example. The example source code is available in SpringDynamicSecurityExample. It is based on the Spring Security “contacts” sample that you can find in Spring Security Samples.

A short summary

We have seen that the Sid entity can represent either a Principal or a GrantedAuthority. This is the crucial point that allows us to exploit the ACL itself to secure method execution in a fully dynamic way. Recall that every secured object in the ACL model is associated with one and only one Acl entity.

The ACL entity can have multiple Access Control Entries which are represented by Permission, Sid and Acl instances. An ACE in which the Sid is a GrantedAuthority can be seen as a permission on an object granted to a Role, where the Role is the GrantedAuthority.

If our goal is to secure method execution (normally we would secure the public methods of services), then the object associated with the Acl would be a method and the permission would relate to its execution. So we can define a custom permission, calling it ‘execute’ for instance.

The Acl would represent a method execution, with its set of ACEs granting the ‘execute’ permission to a user or role (i.e. to a Principal or GrantedAuthority).

The only thing left to do is to define a PermissionEvaluator with a custom permission factory, and a custom voter. Below we show how to implement a simple example.

Dynamic Spring Security Sample

The example runs against an in-memory HSQLDB database. The DataSourcePopulator class initializes the db with all the ACL tables and records and creates two users, ‘granted’ and ‘notGranted’. The first user is given the execute permission on the ‘secured’ method of the TestSecuredMethodService class. The second user has no permissions.

The execution permission is implemented by the class CustomPermission:

package dynamicsecurity.methodsecurity;

import org.springframework.security.acls.domain.BasePermission;

public class CustomPermission extends BasePermission {

	public static final CustomPermission EXECUTE =
			new CustomPermission(1 << 5, 'E');

	protected CustomPermission(int mask) {
		super(mask);
	}

	protected CustomPermission(int mask, char code) {
		super(mask, code);
	}
}

And we also have a custom permission factory that registers our custom permission:

package dynamicsecurity.methodsecurity;

import org.springframework.security.acls.domain.DefaultPermissionFactory;

public class CustomPermissionFactory extends DefaultPermissionFactory {
	public CustomPermissionFactory() {
		super();
		registerPublicPermissions(CustomPermission.class);
	}
}
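As an aside, the mask 1 << 5 is simply a bit position not used by the standard BasePermission constants (READ through ADMINISTRATION occupy bits 0–4). The following standalone sketch (plain Java, our own illustration of the bitmask idea; note that Spring's default permission-granting strategy actually compares ACE masks for exact equality rather than bitwise) shows how such masks can be combined and tested:

```java
public class MaskDemo {

    // Bit positions in the style of Spring Security's BasePermission:
    // READ uses bit 0 (mask 1); our custom EXECUTE uses bit 5 (mask 32).
    public static final int READ = 1 << 0;
    public static final int EXECUTE = 1 << 5;

    // Bitwise check: is every required bit present in the granted mask?
    public static boolean isGranted(int granted, int required) {
        return (granted & required) == required;
    }

    public static void main(String[] args) {
        int granted = READ | EXECUTE; // combined mask: 33
        System.out.println(isGranted(granted, EXECUTE)); // true
        System.out.println(isGranted(READ, EXECUTE));    // false
    }
}
```

Because each permission owns a distinct bit, several permissions can be packed into one integer column in the ACL tables.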

The custom permission factory is configured in the applicationContext-security.xml file, where the permission evaluator is given our custom permission factory as the value of its “permissionFactory” property. In the same file the access decision manager is configured with a custom voter, implemented as follows:

import java.util.Collection;

import org.aopalliance.intercept.MethodInvocation;
import org.springframework.aop.framework.ReflectiveMethodInvocation;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.access.AccessDecisionVoter;
import org.springframework.security.access.ConfigAttribute;
import org.springframework.security.access.PermissionEvaluator;
import org.springframework.security.acls.model.MutableAclService;
import org.springframework.security.core.Authentication;



public class CustomVoter implements AccessDecisionVoter {

	
    @Autowired
    private MutableAclService mutableAclService;
    
    @Autowired
    private PermissionEvaluator permissionEvaluator ;
	

	public boolean supports(ConfigAttribute attribute) {
		return true;
	}

	public boolean supports(Class<?> arg0) {
		return true;
	}

	public int vote(Authentication authentication, Object obj,
			Collection attributes) {

		if (obj instanceof ReflectiveMethodInvocation) {
			MethodInvocation methodInvocation = (MethodInvocation) obj;
			MethodWrapper methodWrapper =
					new MethodWrapper(methodInvocation.getMethod());
			boolean hasPermission = permissionEvaluator.hasPermission(
					authentication, methodWrapper, CustomPermission.EXECUTE);
			if (!hasPermission) {
				return ACCESS_DENIED;
			}
		}

		return ACCESS_GRANTED;
	}

}

The vote(…) method first checks whether the object passed as a parameter is an instance of ReflectiveMethodInvocation. If so, it means that a method annotated with Spring Security’s @Secured, @PreAuthorize or @PostAuthorize is being executed. Since in our specific model we want to mark a method as secured without any reference to roles, we choose to implement a custom annotation like the following:

package dynamicsecurity.methodsecurity;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.springframework.security.access.annotation.Secured;


@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Secured("ROLE_DUMMY")
@interface SecureMethodExecution  {
   
}

This annotation uses @Secured as a meta-annotation. The “ROLE_DUMMY” string does not represent, as its name might imply, a meaningful role; its only purpose is to make our SecureMethodExecution annotation recognized by Spring Security.

Our vote(…) implementation uses the permission evaluator to check whether the authentication object that represents the authenticated principal has the EXECUTE permission on the method being executed.

The MethodWrapper class is a wrapper around the Method object retrieved from the ReflectiveMethodInvocation instance. Its purpose is to provide an ID by which the method can be stored in the ACL as a secured object. As you can see, the constructor calculates this id from the method’s meta-information:

package dynamicsecurity.methodsecurity;

import java.lang.reflect.Method;
import java.lang.reflect.Type;

public class MethodWrapper {


	private Method method;
	private int id;
	
	public MethodWrapper(Method method) {
		super();
		this.method = method;
		Class<?>[] pType = method.getParameterTypes();
		Type[] gpType = method.getGenericParameterTypes();
		String parTypes = "";
		for (int i = 0; i < pType.length; i++) {
			parTypes += "-" + pType[i];
		}
		for (int i = 0; i < gpType.length; i++) {
			parTypes += "-" + gpType[i];
		}
		// Build an identifier from class name, method name and
		// parameter types, then reduce it to a numeric id.
		String identifier = method.getDeclaringClass().getName()
				+ "." + method.getName() + "-" + parTypes;
		int sum = 0;
		for (char c : identifier.toCharArray()) {
			sum += (int) c;
		}
		this.id = sum;
	}
	


	public int getId() {
		return id;
	}
}
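To see what kind of id this produces, here is a standalone sketch (Spring-free, and simplified to ignore generic parameter types) that mirrors the constructor's logic against methods of java.lang.String:

```java
import java.lang.reflect.Method;

public class MethodIdDemo {

    // Mirrors MethodWrapper's constructor: concatenate declaring class,
    // method name and parameter types, then sum the characters.
    public static int idOf(Method method) {
        StringBuilder parTypes = new StringBuilder();
        for (Class<?> p : method.getParameterTypes()) {
            parTypes.append("-").append(p);
        }
        String identifier = method.getDeclaringClass().getName()
                + "." + method.getName() + "-" + parTypes;
        int sum = 0;
        for (char c : identifier.toCharArray()) {
            sum += c;
        }
        return sum;
    }

    // Convenience lookup so callers need not handle checked exceptions.
    public static int idOfStringMethod(String name, Class<?>... params) {
        try {
            return idOf(String.class.getMethod(name, params));
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        // Distinct signatures map to distinct ids, so each secured
        // method gets its own object identity in the ACL tables.
        System.out.println(idOfStringMethod("substring", int.class)
                != idOfStringMethod("trim"));
    }
}
```

Note that a plain character sum can collide for different signatures; in production a stronger hash of the identifier string would be advisable.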

Here is the TestSecuredMethodService, in which the secured() method is marked with @SecureMethodExecution, while the notSecured() method, as its name implies, is not secured:

package dynamicsecurity.methodsecurity;


import org.springframework.stereotype.Service;

@Service
public class TestSecuredMethodService {

	@SecureMethodExecution
	public String secured() {
		return "secured";
	}

	public String notSecured() {
		return "notSecured";
	}

}

These methods can be executed from a JSP page via the TestSecureMethodController:

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.servlet.ModelAndView;

import dynamicsecurity.methodsecurity.TestSecuredMethodService;

@Controller
public class TestSecureMethodController {


	@Autowired
	private TestSecuredMethodService testSecuredMethodManager;



	@RequestMapping(value = "/secure/executeSecuredMethod.htm", 
           method = RequestMethod.GET)
	public ModelAndView executeSecuredMethod() {

		String result = testSecuredMethodManager.secured();
		Map<String, String> model = 
                  new HashMap<String, String>();
		model.put("TestSecuredMethodResult", result);
		return new ModelAndView("projectx/testSecureMethod", 
                  model);
	}
	
	
	@RequestMapping(value = "/secure/executeNotSecuredMethod.htm",
			method = RequestMethod.GET)
	public ModelAndView executeNotSecuredMethod() {

		String result = testSecuredMethodManager.notSecured();
		Map<String, String> model = 
                  new HashMap<String, String>();
		model.put("TestSecuredMethodResult", result);
		return new ModelAndView("projectx/testSecureMethod",
                   model);
	}
	
}

If we run the application we have the following initial page:

First

Clicking on “Test Method Security”, we are required to log in:

Second

We have two users, “granted” and “notGranted”, both with password “user”. Once we pass the login we are shown the following page:

Third

The “Execute Secured Method” link will execute the secured() method of the TestSecuredMethodService class, and the “Execute Not Secured Method” link the notSecured() one. If we log in as the “granted” user and click on the first link, we will see the page below with the “Method executed” message.

Fourth

If we log in as the “notGranted” user we will see the following error page:

Fifth

If we click on the second link with either user, we will get the “Method executed!” page, because the notSecured() method is not under security control, i.e. it is not annotated with @SecureMethodExecution.

How to load Tiles definitions programmatically

Introduction

When implementing some sort of plugin architecture for a web application whose layout is based on the Tiles framework (https://tiles.apache.org/), there may be a need to load new Tiles definitions on the fly.

How to load Tiles definitions programmatically

To load new Tiles definitions, first of all we get the Tiles container by passing the servlet context to the getContainer method of the TilesAccess class:

TilesContainer container = TilesAccess.getContainer(servletContext);

Then, assuming it is a BasicTilesContainer, we get the definitions factory out of it:

BasicTilesContainer basic = (BasicTilesContainer) container;
DefinitionsFactory factory = basic.getDefinitionsFactory();

And, if we have the list of our Tiles definition files in a variable called tilesFilesToLoad, we can load them with the following code:

if (factory instanceof ReloadableDefinitionsFactory) {
	ReloadableDefinitionsFactory reloadableDefFactory =
			(ReloadableDefinitionsFactory) factory;
	for (File file : tilesFilesToLoad) {
		URL source = new URL("file://localhost/"
				+ file.getAbsolutePath());
		((UrlDefinitionsFactory) reloadableDefFactory).addSource(source);
	}
	// Refresh once after all the new sources have been added.
	reloadableDefFactory.refresh();
}

In this way our new definitions are loaded and available for the application to use.

How to dynamically load Resource Bundles in Struts 2

Introduction

Resource bundles are objects characterized by a specific Locale, i.e. they are specific to a particular geographical area in terms of language, date format and other conventions. Usually they are represented by simple properties files on the classpath, with a suffix that indicates the targeted locale, and the application can load the appropriate resource bundle for the current locale using this suffix. Resource bundles are usually all loaded during the application’s startup, but sometimes there is a need to copy and load additional resource bundles dynamically, without restarting the application.
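The locale-based lookup can be illustrated with a small self-contained sketch that uses ListResourceBundle subclasses in place of properties files (all names here are our own; the no-fallback control just keeps the result independent of the machine's default locale):

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

public class BundleDemo {

    // Base bundle, used when no locale-specific variant is found.
    public static class Messages extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "greeting", "Hello" } };
        }
    }

    // Italian variant; getBundle resolves it for an "it" locale
    // exactly as it would resolve a *_it.properties file.
    public static class Messages_it extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "greeting", "Ciao" } };
        }
    }

    public static String greet(Locale locale) {
        // Skip the default-locale fallback so the answer depends
        // only on the locale passed in.
        ResourceBundle bundle = ResourceBundle.getBundle(
                "BundleDemo$Messages", locale,
                ResourceBundle.Control.getNoFallbackControl(
                        ResourceBundle.Control.FORMAT_DEFAULT));
        return bundle.getString("greeting");
    }

    public static void main(String[] args) {
        System.out.println(greet(Locale.ITALIAN)); // Ciao
        System.out.println(greet(Locale.ENGLISH)); // Hello
    }
}
```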

How to load ResourceBundles dynamically with LocalizedTextUtil

A common situation in which there may be a need to load new resource bundles on the fly is a typical plugin architecture. Imagine a web application in which components made of classes, JSP pages, CSS files and, of course, resource bundles can be loaded dynamically without restarting the application.

First of all, the resource bundle file must be available on the classpath. If we choose to name the message bundles global-messages, then the following must be set in the struts.properties configuration file:

struts.custom.i18n.resources = global-messages 

Then a classloader must be created with the URL of the path where the bundle is stored:

        File path = new File(pluginPath);
        URL url = path.toURI().toURL();
        URL[] urls = new URL[]{url};
        ClassLoader cl = new URLClassLoader(urls);

The classloader is then passed to the LocalizedTextUtil setDelegatedClassLoader method:

        LocalizedTextUtil.setDelegatedClassLoader(cl);

And finally we use addDefaultResourceBundle to load the newly copied resource bundle:

        LocalizedTextUtil.addDefaultResourceBundle(pluginPath + pluginBundleName);
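Under the hood this works because resource bundle lookups go through a classloader. The mechanism can be sketched in plain Java, without Struts (paths and names are illustrative): write a bundle file into a directory, point a URLClassLoader at it, and load the bundle through that loader.

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;
import java.util.ResourceBundle;

public class DynamicBundleDemo {

    // Reads a key from a "global-messages" bundle located in an
    // arbitrary directory, by pointing a URLClassLoader at it the way
    // a plugin mechanism might at runtime.
    public static String loadMessage(Path pluginDir, String key)
            throws IOException {
        URL[] urls = { pluginDir.toUri().toURL() };
        try (URLClassLoader cl = new URLClassLoader(urls)) {
            ResourceBundle bundle = ResourceBundle.getBundle(
                    "global-messages", Locale.ROOT, cl);
            return bundle.getString(key);
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a freshly deployed plugin directory.
        Path dir = Files.createTempDirectory("plugin");
        Files.write(dir.resolve("global-messages.properties"),
                "welcome=Hello from the plugin"
                        .getBytes(StandardCharsets.ISO_8859_1));
        System.out.println(loadMessage(dir, "welcome"));
    }
}
```

Because a fresh classloader is used for each lookup, ResourceBundle's per-classloader cache never serves a stale copy of a redeployed bundle.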

How to customize the StrutsSpringObjectFactory

Introduction

Spring can use its own MVC or integrate other MVC frameworks. For instance, it can integrate with Struts 2 through a specific plugin. The plugin overrides the Struts object factory, providing a way to configure Struts actions as beans in the Spring context. Sometimes customized behaviour is needed and the Spring plugin as it is is not enough. In that case the StrutsSpringObjectFactory, which is the core class of the plugin, can be extended, and the customized version can be configured instead of the default one.

How to provide a customized StrutsSpringObjectFactory

In order to extend the StrutsSpringObjectFactory, the buildBean method of the Spring Struts plugin should be overridden. In the following example the buildBean method is overridden and its logic customized to retrieve a bean from a different Spring context than the default one, if it does not exist in the default. This Spring context is stored in the ServletContext, which can be retrieved from the default Spring application context.

public class CustomSpringObjectFactory extends StrutsSpringObjectFactory {

	…

	public Object buildBean(String beanName, Map extraContext,
			boolean injectInternal) throws Exception {

		XmlWebApplicationContext ctx =
				(XmlWebApplicationContext) this.appContext;

		ClassPathXmlApplicationContext otherCtx =
				(ClassPathXmlApplicationContext) ctx
						.getServletContext()
						.getAttribute("otherSpringContext");

		Object o = null;

		if (this.appContext.containsBean(beanName)) {
			o = this.appContext.getBean(beanName);
		} else if (otherCtx != null) {
			// Bean not in the default context: try the secondary one.
			o = otherCtx.getBean(beanName);
			return o;
		}

		if (o == null) {
			Class beanClazz = getClassInstance(beanName);
			o = buildBean(beanClazz, extraContext);
		}

		if (injectInternal) {
			injectInternalBeans(o);
		}

		return o;
	}

	…
}

Finally in the struts.properties configuration file the object factory property should be set with the custom implementation:

struts.objectFactory=org.example.CustomSpringObjectFactory

Thread scope and ThreadLocal

Introduction

When we implement a Java servlet web application, we face the problem of choosing which scope to put information in, depending on our needs. In normal scenarios we essentially deal with the Context (i.e. ServletContext), Session and Request scopes.
A slightly different requirement arises when one wants to store some object or information in the current thread, so that it is isolated from other threads. One might say that the ServletRequest object fits this requirement, because each request runs in a separate thread and one could simply store the information as a request attribute; but in a classical multilayer application the request object is not available in the business logic layer or in the data access layer.
A Java class called ThreadLocal comes in handy in these situations. In this brief article we will describe the very basics of ThreadLocal usage.
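The basic behavior can be shown with a minimal standalone sketch (names are ours): a value set in one thread is invisible to another.

```java
public class ThreadLocalDemo {

    // Each thread sees its own, independent copy of this value.
    private static final ThreadLocal<String> CURRENT_USER =
            new ThreadLocal<String>();

    // Sets a value in the calling thread, then checks what a second
    // thread sees. Returns "caller-value/other-thread-value".
    public static String isolationCheck(String user)
            throws InterruptedException {
        final String[] seenElsewhere = new String[1];
        CURRENT_USER.set(user);
        Thread other = new Thread(new Runnable() {
            public void run() {
                // This thread never called set(), so it reads null.
                seenElsewhere[0] = CURRENT_USER.get();
            }
        });
        other.start();
        other.join();
        String result = CURRENT_USER.get() + "/" + seenElsewhere[0];
        CURRENT_USER.remove(); // avoid leaking into pooled threads
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(isolationCheck("alice")); // alice/null
    }
}
```

The remove() call at the end matters in servlet containers, where worker threads are pooled and reused across requests.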

A Sample Scenario

As a scenario that illustrates why there may be a need to store objects in a thread scope, we will describe the implementation of a typical Data Access Object layer, keeping it as simple as possible. We picked this example only because it makes the point well; please keep in mind that there are many out-of-the-box solutions for the DAO pattern, both in the form of pure JDBC (Spring’s JdbcTemplate) and of more advanced ORM frameworks, and there is no point in, as they say, ‘reinventing the wheel’.
A typical issue in implementing a DAO layer is encompassing two or more database operations in a single transaction. Here we are talking about operations that change the data state, such as inserts, deletes and updates. To deal with this, we can create the JDBC connection, set its autocommit property to false and pass it as a parameter to each of the DAO method calls involved; then, after having called all the methods in the transaction, execute commit (or rollback in case of errors) on the connection object.
This approach, though, couples the DAO method signatures with the JDBC connection. It would be nice if we could get a ‘cleaner’ version of our DAO interfaces, making them unaware of the connection. One way to do this is to implement some sort of transaction manager class with static methods, with the responsibility of creating the connection (or getting it from a connection pool), storing it in the current thread, giving it to the calling DAO object and handling the transaction boundaries between the DAO method calls. Storing the connection in the current thread can be done using the ThreadLocal class; in the following paragraph we show a simple example of how this can be done.

Concrete Example

In the following example we use very simple classes just to explain how the whole ThreadLocal mechanism works. First of all we implement a minimal transfer object:

public class SampleTranferObject {

	private String sampleField;

	public String getSampleField() {
		return sampleField;
	}

	public void setSampleField(String sampleField) {
		this.sampleField = sampleField;
	}

}

A simple DAO interface and implementation:

public interface SampleDao {

	public void addSampleField(SampleTranferObject tranferObject);

}

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SampleDaoImpl implements SampleDao {

	public void addSampleField(SampleTranferObject tranferObject) {
		Connection con = null;
		Statement statement = null;

		String sql = "insert into SampleTable values('"
				+ tranferObject.getSampleField() + "')";
		try {
			// The connection is retrieved from the current thread,
			// where the transaction manager stored it.
			con = SampleTransactionManager.getConnection();
			statement = con.createStatement();
			statement.executeUpdate(sql);
		} catch (SQLException e) {
			e.printStackTrace();
		} finally {
			if (statement != null) {
				try {
					statement.close();
				} catch (SQLException e) {
					e.printStackTrace();
				}
			}
		}
	}
}

And finally our transaction manager:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SampleTransactionManager {

	// Each thread gets its own connection slot.
	private static ThreadLocal<Connection> local =
			new ThreadLocal<Connection>();

	public static void startTransaction() throws SQLException {
		Connection con = DriverManager
				.getConnection("jdbc:mysql://localhost:3306/mysql");
		con.setAutoCommit(false);
		local.set(con);
	}

	public static Connection getConnection() {
		return local.get();
	}

	public static void commit()
	{
		Connection con = local.get();
		if(con != null){
			try {
				con.commit();
				con.close();
			} catch (SQLException e) {
				e.printStackTrace();
			}
		}

	}

	public static void rollback()
	{
		Connection con = local.get();
		if(con != null){
			try {
				con.rollback();
				con.close();
			} catch (SQLException e) {
				e.printStackTrace();
			}
		}

	}

}

The SampleTransactionManager class has a startTransaction method that creates a new connection, sets autocommit to false, so that no database operation is committed until commit is explicitly called on the connection, and finally stores the connection in a static ThreadLocal variable. The magic behind the ThreadLocal class makes the connection actually be stored in the current thread. This method is called outside of the DAO object’s method calls to mark the transaction’s start.
The getConnection method retrieves the connection from the ThreadLocal variable and returns it to the caller, i.e. the DAO object.

Let’s put this all together in the following code:

    SampleDao employeeDao = new SampleDaoImpl();
    SampleTranferObject tranferObject1 = new SampleTranferObject();
    SampleTranferObject tranferObject2 = new SampleTranferObject();
    tranferObject1.setSampleField("sampleValue1");
    tranferObject2.setSampleField("sampleValue2");
    try {
	SampleTransactionManager.startTransaction();	
	employeeDao.addSampleField(tranferObject1);
	employeeDao.addSampleField(tranferObject2);
	SampleTransactionManager.commit();

    } catch (SQLException e) {
	SampleTransactionManager.rollback();
    }

The employeeDao instance is used to add two values to a database table through the tranferObject1 and tranferObject2 variables. As we can see, the SampleTransactionManager is used to start the transaction and to commit or roll back. The addSampleField method does not need the connection to be passed in as a parameter since, as we saw in the DAO implementation, it is retrieved internally using the static SampleTransactionManager getConnection method.

Conclusions

We have seen how ThreadLocal allows us to access the current thread context and store objects in it. In this particular example we managed to keep the DAO method signatures ‘cleaner’ and independent of the connection in a typical transactional scenario (what is shown here could be improved to free the DAO of JDBC boilerplate code and keep it focused mainly on SQL, as Spring does with JdbcTemplate).

How to modify the servlet request

Introduction

Dealing with HTTP web frameworks, sooner or later one has to cope with requirements that go beyond the standard features offered by the chosen platform. A common issue is changing, on the fly, the request submitted by the client or the response content returned to it. Java Servlet technology deals with these issues basically through servlet filters and some other complementary tricks that we will explain in the following paragraphs.

How to change the http request

Changing the servlet HTTP request can be done using the servlet filter mechanism, but that alone is not enough. Most of the HttpServletRequest fields are read-only, since the standard scenario does not cover the possibility that the original request information submitted by the client could be changed. The strategy to overcome this limitation is to wrap the request in another class, customize the desired getter methods and submit the wrapper object to the filter chain instead of the original request.
The Java servlet API already comes with two classes, ServletRequestWrapper and HttpServletRequestWrapper, that can be used as wrappers for the request. In order to change the original request, one creates a class that extends one of these two, depending on which fields need to be changed (if the field is available in the ServletRequest class, extending ServletRequestWrapper will do). In this class one can then override the getter methods that provide the needed fields and implement the desired logic to compute their custom values. Finally, in the servlet filter’s doFilter method, an instance of the wrapper class is created, passing the original request to its constructor, and is then passed to the chain’s doFilter call instead of the original request.

Http request change example

The following is a simple example of how the whole thing works:

public void doFilter(ServletRequest request, ServletResponse response,
FilterChain chain) throws IOException, ServletException {

  ServletRequest requestModified = new HttpServletRequestWrapper(
  (HttpServletRequest) request) {
    @Override
    public String getParameter(String name) {
      String paramValue = super.getParameter(name);
      if(name.equals("parameterThatNeedsDefault") 
         && paramValue == null) {
        return "defaultValue";
      }
      return paramValue;
    }
  };

  chain.doFilter(requestModified, response);
}

In this example a default value is provided when a specific request parameter is null. An anonymous inner class is used to extend HttpServletRequestWrapper to keep the implementation terse. The example above is trivial; nevertheless it shows exactly what is needed to hook into the servlet request lifecycle.

Transform the http response

In the following post, How to transform the servlet response content, we will describe how to transform the response content just before it is sent back to the client.

How to transform the servlet response content

Introduction

In the previous post How to modify the servlet request we explained how to change the servlet request on the fly, using the servlet filter mechanism and the HttpServletRequestWrapper class. In the following paragraphs we are going to show how to change the response content just before it is sent back to the client.

How to transform the http response content

We can transform the servlet HTTP response content just before it is sent back to the client, using the servlet filter mechanism and the HttpServletResponseWrapper class. The wrapper is needed because the output stream of the original response is handled and closed by the servlet engine. What we need is to take the original content generated by the servlet, transform it and write it back to the response output stream; but we cannot do this in the normal request-response flow, because as soon as the content is written to the response output stream the latter is closed, and it is no longer possible to write anything to it. The solution is to wrap the response in an extension of HttpServletResponseWrapper and provide it with a custom output stream. The wrapper is then passed to the filter’s chain execution instead of the original response, and the servlet engine writes its content to the custom output stream. The content can then be taken, transformed and written to the original response output stream. A simple example is shown in the following paragraph.

Http response transformation example

The following is a simple example of how the whole thing works. First of all we define a wrapper that extends HttpServletResponseWrapper:

public class CustomResponseWrapper extends
   HttpServletResponseWrapper {

   private CharArrayWriter output;

   public CustomResponseWrapper(HttpServletResponse response) {
      super(response);
      output = new CharArrayWriter();
   }

   public String getResponseContent() {
      return output.toString();
   }

   @Override
   public PrintWriter getWriter() {
      return new PrintWriter(output);
   }
}

Then in our servlet filter’s doFilter method we implement the response transformation logic:

public void doFilter(ServletRequest request,
   ServletResponse response, FilterChain chain) throws
   IOException, ServletException {

   PrintWriter out = response.getWriter();
   CustomResponseWrapper wrapper = new CustomResponseWrapper(
         (HttpServletResponse) response);

   // Let the rest of the chain write into the wrapper's buffer.
   chain.doFilter(request, wrapper);

   CharArrayWriter writer = new CharArrayWriter();
   String originalContent = wrapper.getResponseContent();
   writer.write("<h1>Added Title</h1>");
   writer.write(originalContent);
   response.setContentLength(writer.toString().length());
   out.write(writer.toString());
   out.close();
}

Here, the chain object’s doFilter is executed with the wrapper instead of the original response. When the chain’s doFilter execution completes, the content is taken from the wrapper and manipulated using a CharArrayWriter: an HTML h1 title is added to it. Finally the changed content is written to the original response output stream and the latter is closed.

Java 8 new main features and Java Core evolution

Current IT Scenario

The current IT market is characterized by fast change. Many different issues bring great complexity; among them we can mention the need for multi-tenancy and cloud platforms. These new paradigms require more sophisticated instruments to implement solutions at an effective production pace. Some issues in the software lifecycle must be dealt with through new processes and methodologies; others are more focused on tools. When it comes to discussing tools, we have a lot of choices among frameworks and middleware. But what about the Java core? Is the language itself fully fitted for the current IT market and for the near future?

New Java core capabilities

If we look at the latest Java releases, up to Java 8, we see that some new features have been implemented. Among them, generic programming and functional programming deserve special attention. They fill a gap in the development world and let Java keep pace with other technologies. The following two paragraphs summarize the two.

Generic Programming

Generic programming can be defined simply as a style of computer programming in which algorithms are written in terms of types that are specified parametrically. It provides a different concept of reuse than the general object-oriented paradigm: when a set of classes share the same behavior and data structure and differ only in types, there is an opportunity to exploit generic programming. It offers a kind of reuse complementary to the usual object-oriented one, based more on templating than on inheritance and other OO concepts.
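As a minimal illustration (our own example), a single generic method can serve any element type while remaining fully type-checked at compile time:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class GenericsDemo {

    // One algorithm written once, parameterized over the element type:
    // returns the first and last elements of a non-empty list.
    public static <T> List<T> firstAndLast(List<T> items) {
        List<T> result = new ArrayList<T>();
        result.add(items.get(0));
        result.add(items.get(items.size() - 1));
        return result;
    }

    public static void main(String[] args) {
        // The same method serves both types; the compiler rejects
        // any attempt to mix them up.
        System.out.println(firstAndLast(Arrays.asList("a", "b", "c")));
        System.out.println(firstAndLast(Arrays.asList(1, 2, 3)));
    }
}
```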

Functional Programming

Giving a precise and definitive definition of functional programming is not easy, as most of the literature on the subject lacks a formal approach and is scattered across a variety of different descriptions. As an object-oriented language, Java was originally designed primarily to support standard imperative (procedural) programming. With imperative programming, code is written in a way that describes in exact detail the steps needed to accomplish a task; we can also call this algorithmic programming. In this style of programming some information must be stored in a shared manner. Functional programming avoids putting state information in ‘external’ variables and instead consists of composing a problem as a set of self-contained functions to be executed. Each function takes an input and returns an output, and the output of one function can be the input of another; no state information is stored during the run. We can say that the functional approach focuses not on how to perform tasks (algorithms) but on what information and what transformations are required to obtain that information. Even if Java is not a pure functional language, it has introduced functional features through streams and lambda expressions. These lack the power of a pure functional language but give the opportunity to approach specific problems in a more robust and less error-prone way.
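A small sketch of this style using Java 8 streams and lambdas (the example is ours): a pipeline of self-contained stages, with no loop counters and no mutable accumulator.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsDemo {

    // Declarative pipeline: each stage is a function whose output
    // feeds the next; the input list is never mutated.
    public static List<String> shoutLongNames(List<String> names) {
        return names.stream()
                .filter(n -> n.length() > 3)   // what to keep
                .map(String::toUpperCase)      // how to transform
                .sorted()
                .collect(Collectors.toList()); // what to produce
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("ada", "grace", "alan", "bob");
        System.out.println(shoutLongNames(names)); // [ALAN, GRACE]
    }
}
```

The same result written imperatively would need a temporary list, an explicit loop and an in-place sort; here the "what" is stated and the "how" is left to the library.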

Is that enough?

Support for generic and functional programming is certainly of great importance and covers programming issues that would otherwise have to be addressed outside the scope of the Java technology stack, but other issues still lack sufficient support. JEE offers in some way a standard, and following standards is a good thing, but its releases follow a very long timeframe: you have to wait years for something new, which is out of touch with the rapid growth of new requirements from the IT world. The new IT tendencies are toward SaaS and cloud paradigms, which require highly configurable systems. Many software features need to be configured dynamically, in a programmatic way. Designing solutions as plugin-oriented ones is a must, and so is the ability to dynamically install and uninstall components. Dealing with these issues requires an underlying framework that is robust, flexible, maintainable and as standard as possible. The OSGi technology was meant to be a standard to cope with these requirements, but is it really a viable solution?

OSGi

OSGi is a standard developed by the Java IT industry aimed at implementing dynamic modular systems. It can deal with components that are installed, started, stopped and uninstalled on the fly, without affecting the application server lifecycle as a whole, and it also maintains a registry that adapts to the installation state of the components. The main drawback is that its implementations are somewhat heavyweight and complex: they do not fit well with other frameworks (Spring, Struts…) and they require a specific runtime environment on the application server side. It is definitely not a lightweight solution. It addresses the problem of dealing with highly dynamic, component-based systems, but it imposes too many constraints on the overall software ecosystem. Its only chance of success in the future would be for all the application server vendors to come to consider it a definitive standard. But frameworks normally succeed when they fit the world, not when the world has to fit them. All the crucial features that the IT market requires should be addressed by the Java core platform in the first place, leaving to external frameworks only the burden of the high-level stuff.

Java Core And Dynamic Class Loading

The main problem in developing lightweight solutions lies in the lack of support from the Java core itself. Java classes can be loaded dynamically through the class loader architecture, but loading and unloading classes on the fly is not so straightforward and requires special attention. Things become even more complex when we must deal not just with the lifecycle of single classes but with bundles of classes as a whole (jar libraries, for instance). There is no reliable mechanism to deal with this, and there is also no support for handling and sharing different versions of the same library on the virtual machine (something like the Global Assembly Cache of the .NET platform, for instance). It is frankly difficult to understand why this has not received any attention in the latest releases of Java, since overcoming this limitation would give a great impulse toward addressing even the web application market segment that is up to now almost entirely owned by dynamic-language platforms like PHP.
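
A minimal sketch (class names are illustrative) of why dynamic reloading needs such care: in the JVM a class is identified by its name plus its defining ClassLoader, so the same bytes loaded through two loaders yield two distinct, mutually incompatible Class objects, and "unloading" is only possible by letting a whole loader become garbage.

```java
import java.io.IOException;
import java.io.InputStream;

public class ClassLoaderDemo {

    // A loader that defines this demo class from its own .class bytes
    // instead of delegating to the parent first, simulating a reload.
    static class IsolatingLoader extends ClassLoader {
        @Override
        protected Class<?> loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            if (!name.startsWith("ClassLoaderDemo")) {
                return super.loadClass(name, resolve); // JDK classes: delegate
            }
            String resource = name.replace('.', '/') + ".class";
            try (InputStream in =
                    ClassLoader.getSystemResourceAsStream(resource)) {
                byte[] bytes = in.readAllBytes();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> a = new IsolatingLoader().loadClass("ClassLoaderDemo");
        Class<?> b = new IsolatingLoader().loadClass("ClassLoaderDemo");
        // Same bytes, different defining loaders: distinct classes.
        System.out.println(a == b);                      // false
        System.out.println(a == ClassLoaderDemo.class);  // false
        // There is no unload API: classes go away only when their loader
        // and every reference to them become garbage-collectable.
    }
}
```

This is the mechanism OSGi builds on (one class loader per bundle), and it illustrates why managing whole bundles of classes by hand, without framework support, is error-prone.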

Conclusion

The lack of support in the Java core for reliably loading and unloading bundles of classes dynamically, and for versioning them, represents a great obstacle to the growth of mature and flexible solutions based on the Java stack in the new IT scenarios of highly dynamically configurable systems. If future releases of the Java core fill this gap, heavyweight implementations like OSGi will lose importance and there will be much more room for light and extensible implementations.

Dynamically Securing Method Execution with Spring Security

Preface

Spring Security provides robust support for securing Spring-based applications, but it falls short when it comes to designing dynamically configurable security, especially dynamic configuration of access to Java methods. How can we overcome these limitations?

Introduction

When we want to secure an application we must define access policies for its functions, and we basically cope with two main models that we can call ‘role-based security’ and ‘object-based security’: the first works by defining roles played by users and thereby limiting access to specific system functions, while the second focuses on permissions defined on single domain objects. This dichotomy holds for the Spring framework too. Spring Security provides both models, and for each it provides a robust solution on its own: role-based security is implemented by the base Spring Security authorization API, and object-based security by the ACL module. Each solves a particular problem area, and together they perhaps cover most needs, but there are some limitations when it comes to designing more advanced solutions. What if we want, for instance, to dynamically configure authorization for method execution? In Spring Security we can secure methods by setting an annotation with an expression based on a role, but a role is something defined and configured in advance, not dynamically. Another possibility would be to use the ACL to secure a method based on the permissions held on a domain object passed as an argument, but this does not cover the situation in which we only want to authorize the method execution itself, without any reference to its parameters. There are certainly ways of customizing the Spring Security classes to overcome these limitations, but in this article we want to point out a possible solution that exploits the ACL security model itself to provide a unique base for securing the whole application in a dynamic way. But first let’s have a quick look at how methods are authorized with ‘role-based’ security and ACL in practice.

Role-based security

A role in Spring Security is represented by an instance of the GrantedAuthority class. A list of GrantedAuthority objects can be stored on an Authentication object to represent the roles played by the current authenticated user; the AuthenticationManager is responsible for inserting the GrantedAuthority instances into the Authentication object. An AccessDecisionManager is responsible for making authorization decisions based on statements configured in the Spring XML configuration files or as expressions in annotations. One can implement one's own AccessDecisionManager or use one of the Spring implementations based on voting through the AccessDecisionVoter interface. Methods can be secured either with AOP configuration or, more simply, using annotations and expressions like the following:

@PreAuthorize("hasRole('ROLE_USER')") 
public void method();

Secure objects (ACL)

ACL relies on an API backed by database tables to define authorization permissions (like write, delete, admin) on single domain objects. A common way to secure an object is to use the hasPermission expression in an annotation like the following:

@PreAuthorize("hasPermission(#contact, 'admin')") 
public void admin(Contact contact); 

In the example above, execution of the admin method is authorized only if the current user holds the ‘admin’ permission on the contact parameter.

A solution to dynamically secure method execution

Spring does not seem to offer an out-of-the-box solution to dynamically secure methods, i.e. to set the permission to execute a method on the fly. One can limit access to methods using roles, or define access rules on method parameters through the ACL. Roles are a rather static way to define access rules: they must be defined in advance and they are coarse-grained, since a role is not directly targeted at a single method or object but represents some general rule that limits access to certain areas of the application. The ACL, on the other hand, is used for securing single objects by permissions, which are very fine-grained concepts directly related to the objects to be secured and not to some general application behavior. One way to overcome these limitations would be to customize the Spring Security API, for instance by providing one's own implementation of the AccessDecisionManager interface. Nevertheless, at the heart of the Spring Security ACL model there is already something that could do the trick, perhaps in a more straightforward and cleaner way. The key is to represent a role not as a general application behavior associated with a user but simply as a set of permissions. Let’s briefly recall the main entities involved in the ACL design:

  • Acl: represents an object, normally a domain object, through an ObjectIdentity, and stores a set of AccessControlEntries.
  • AccessControlEntry (ACE): composed of a Permission, a Sid and an Acl.
  • Permission: represents what can be done to an object (like write, read, admin); it is implemented as an immutable bit mask.
  • Sid: represents a Principal or a GrantedAuthority.
  • ObjectIdentity: each domain object is represented internally within the ACL module by an ObjectIdentity.

These classes are persisted to the database by the following set of tables:

  • ACL_SID: stores Sid instances.
  • ACL_CLASS: identifies the class of every domain object in the system.
  • ACL_OBJECT_IDENTITY: stores information for each unique domain object instance in the system; it corresponds to Acl instances and contains a foreign key to the ACL_CLASS row representing the object type.
  • ACL_ENTRY: stores AccessControlEntry instances.

If we consider that a Sid can represent either a Principal or a GrantedAuthority, we are taken straight to the point: the ACL model already offers us a way to implement roles as sets of permissions, since an ACE ties together a Permission, a Sid and an Acl. A set of ACEs in which the Sid represents a single GrantedAuthority can be seen exactly as a role made up of a set of permissions. We can even assign permissions directly to a user, by using a Sid that represents a Principal instead of a GrantedAuthority. But what kind of permission can we associate with a method? What we want to secure is method execution, so we can define a custom permission and call it ‘execute’, for instance. We can then represent an Acl as a method execution, precisely as a wrapper around an instance of the java.lang.reflect.Method class. The wrapper is needed to provide an additional id property identifying the specific method execution instance. The Acl will then be given its own set of ACEs with the ‘execute’ permission associated with a user or role (i.e. with a Principal or a GrantedAuthority). In order to secure a method, a custom annotation could then be implemented, let’s call it SecureMethodExecution, as:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Secured("ROLE_DUMMY")
@interface SecureMethodExecution {
}

Here the SecureMethodExecution annotation declaration uses Spring Security’s @Secured annotation as a meta-annotation, so that @SecureMethodExecution is recognized by Spring as if it were @Secured with the attribute value “ROLE_DUMMY”. The sole purpose of the “ROLE_DUMMY” attribute is to get the default AccessDecisionManager to “think” that @SecureMethodExecution is a regular @Secured annotation.

Then the methods could be annotated like this:

@SecureMethodExecution 
public void methodName(){...}

Finally, a specific implementation of the AccessDecisionVoter interface would provide the access logic. The following is an example of what the vote method might look like:

public int vote(Authentication authentication, Object object,
        Collection<ConfigAttribute> attributes) {
    if (object instanceof ReflectiveMethodInvocation) {
        MethodInvocation methodInvocation = (MethodInvocation) object;
        // Only vote on methods carrying the custom annotation.
        if (methodInvocation.getMethod()
                .getAnnotation(SecureMethodExecution.class) != null) {
            MethodWrapper methodWrapper =
                    new MethodWrapper(methodInvocation.getMethod());
            // Ask the ACL module whether the current user (directly or
            // through one of its granted authorities) holds the custom
            // 'execute' permission on this method.
            boolean hasPermission = permissionEvaluator.hasPermission(
                    authentication, methodWrapper, CustomPermission.EXECUTE);
            if (!hasPermission) {
                return ACCESS_DENIED;
            }
        }
    }
    return ACCESS_GRANTED;
}
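
The voter relies on a MethodWrapper that gives a plain java.lang.reflect.Method the identity the ACL module needs, since an ObjectIdentity is built from a type and an id. The article does not spell this class out; the following is a minimal sketch in plain Java, and the id scheme is an illustrative assumption.

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.stream.Collectors;

// Hypothetical sketch of the MethodWrapper used by the voter: it derives
// a stable identifier from the method's signature, suitable for building
// an ACL ObjectIdentity. The id format is an assumption, not from the
// original article.
public final class MethodWrapper {

    private final Method method;

    public MethodWrapper(Method method) {
        this.method = method;
    }

    // Declaring class, name and parameter types, so that overloaded
    // methods get distinct identifiers.
    public String getId() {
        String params = Arrays.stream(method.getParameterTypes())
                .map(Class::getName)
                .collect(Collectors.joining(","));
        return method.getDeclaringClass().getName()
                + "#" + method.getName() + "(" + params + ")";
    }

    public Method getMethod() {
        return method;
    }
}
```

With such an id, each secured method maps to exactly one row in ACL_OBJECT_IDENTITY, and its ‘execute’ ACEs can be managed like those of any other domain object.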

Using this model, a user interface could be built through which ACE instances are created or removed on the fly for every method that needs to be secured (usually service methods), without the need to statically configure and restart the application.
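
To make the dynamic aspect concrete, here is a toy in-memory stand-in for the ACL store (the class and its API are illustrative, not Spring's): granting or revoking an ‘execute’ entry immediately changes the outcome of later access checks, which is exactly the behavior such an administration interface would drive, through the real MutableAclService, against the ACL_ENTRY table.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy in-memory stand-in for the ACL store: each secured method id maps
// to the set of sids (user names or role names) holding the 'execute'
// permission. Entries can be added and removed at runtime.
public class ExecuteAclStore {

    private final Map<String, Set<String>> entries = new HashMap<>();

    public void grant(String methodId, String sid) {
        entries.computeIfAbsent(methodId, k -> new HashSet<>()).add(sid);
    }

    public void revoke(String methodId, String sid) {
        Set<String> sids = entries.get(methodId);
        if (sids != null) {
            sids.remove(sid);
        }
    }

    // The check a voter would perform: is any of the caller's sids
    // (its principal or one of its granted authorities) granted?
    public boolean canExecute(String methodId, Set<String> callerSids) {
        Set<String> granted = entries.getOrDefault(methodId, Set.of());
        return callerSids.stream().anyMatch(granted::contains);
    }
}
```

In the real solution each grant or revoke is a row inserted into or deleted from ACL_ENTRY, so the new policy takes effect on the very next method invocation.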

Conclusions

There could be several different ways to dynamically secure methods in Spring Security; nevertheless, the solution above has the advantage of relying on the Spring Security architecture itself, needing only minor customizations.

References

Spring Security reference documentation: http://docs.spring.io/spring-security/site/docs/3.0.x/reference/springsecurity.html