Monday, June 21, 2021

Two One-liners for Quick ColdFusion Static Analysis Security Testing

 I want to find all of the security bugs.  I'm sure you do too.  

Some security bug classes are easy to find at scale through automated dynamic security scanning.  Maybe you're also doing some manual application penetration testing.  And maybe you can invest the time to perform in-depth manual code review of important portions of an application, such as core libraries and high-value actions.  But a high-impact vulnerability -- such as remote code execution -- in an insignificant, overlooked portion of your codebase can ruin your day.  Automated code review needs to play a part in any software security effort.

There are a handful of static analysis tools available that support ColdFusion and CFML.  While some of them performed very well in benchmark testing, I found that these tools did not consistently flag some basic vulnerable code statements, even when the "alert on everything" knobs were turned to 11.

I had a collection of grep, awk, and Perl one-liners living in my shell history for quick-and-dirty CFML code reviews.  Most of them had to do with searching for various tags and functions that could be dangerous, and doing further manual review of the results.  But every time I wanted to do some quick automated code review, I had to find them and re-remember how to run and tweak them.  I wanted something that was a little more repeatable, so I wound up building some custom CFML static analysis tooling.  The full toolset isn't being released at this time, but I am releasing two one-liners.  They won't find all of the bugs, but hopefully they'll help you find some.

At its core, the toolset is a "smart" grep that can be used to search for user-controlled input in dangerous tags and functions.  Many dangerous CFML tags and functions are well-known and have been documented elsewhere.  For example, maybe you want to look for tags and functions that can lead to SSRF.  Examples of user-controlled data can include any of the following:

  • Variables in the URL Scope
  • Variables in the FORM Scope
  • Some Variables in the CGI Scope
  • Cookies (Variables in the COOKIE Scope)

I wound up using pcregrep, since it was "good enough" for a functional proof-of-concept and offered easy multi-line matching support.  ColdFusion and CFML support various syntaxes and styles, with both tags and function statements.  These regexes have tended to work for me, though you may want to tweak them further if your code style is significantly different.

I'll also mention that I am okay with a higher false-positive rate if it means a much lower false-negative rate.  I don't want to miss any security bugs that simple greps can find.  I'll happily spend some time reviewing potential false-positives if it means more thorough automated code coverage.  (For example, the regexes below treat the entire CGI scope as tainted, and don't account for the CGI variables that the user can't directly control.  And we'll also miss cases where input validation of user-controlled input actually happens elsewhere in the code.)  This process hopefully gives us a funneled data set, where we take a very large amount of information and are left with a smaller amount of manageable, actionable items after some human review.

With that long introduction, I offer the two one-liners below.  Consider the following contrived vulnerable code:
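A contrived example might look something like the CFML below.  The dangerousFunc and dangerousTag names are the same placeholders used in the one-liners that follow; substitute any real tag or function you want to hunt for:

```cfml
<!--- Contrived examples only: dangerousFunc and dangerousTag are
      placeholders for whatever sink you're actually searching for. --->
<cfset result = dangerousFunc(url.userInput)>

<dangerousTag file="#form.uploadPath#">
```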
Sample usage below.  You'll need to change dangerousFunc and dangerousTag to the actual function or tag you want to search for, and /path/to/code/ to your actual source path.

% pcregrep --include='\.(cfc|cfm)$' --color=always -Minr '(?s)(?<!\w)(dangerousFunc\()([^\)]*)(?<!\w)(url\.|form\.|cookie\.|cgi\.)([^\)]*)\)' /path/to/code/



% pcregrep --include='\.(cfc|cfm)$' --color=always -Minr '(?s)<dangerousTag\s([^>]*)(?<!\w)(url\.|form\.|cookie\.|cgi\.)([^>]*)>' /path/to/code/




Using these one-liners is still a somewhat manual process, since you need to check tags and functions individually, but they provide the foundation for something that can be expanded upon, in areas such as:

  • Additional automation
  • Support for tainted variables (i.e., those derived from user-controlled input sources)
  • Regex cleanup
  • Dealing with de-duplication of findings
  • Permanent suppression of false-positive findings
  • Report generation
  • Integration with CI/CD pipelines
  • Integration with ticketing workflows
  • Enhancements with IAST (Interactive Application Security Testing) and runtime analysis

These items are left as a topic for another day; fixing all of the bugs that you find is left as an exercise for the reader. :)

Thursday, June 10, 2021

Stupid Unix Tricks - Using $IFS in Web Application Command Injection Vulnerabilities for Full RCE

A while ago I was testing a web application and found a command injection vulnerability.  The payload could be sent via an email address field, so something like:



User not found

Testing a little more, I was able to execute arbitrary commands and see the output with requests similar to:


which returned:

User uid=100(www-data) gid=101(www-data) groups=101(www-data) not found

The application did some validation on the email addresses, although the local (username) portion of the address could contain most characters.  But spaces weren't allowed, and I really wanted to be able to run commands that took an argument.  

The target system was running a Unix-like operating system.  Enter IFS, the Internal Field Separator.  In simple terms, IFS is a built-in, special shell variable that's used as a delimiter to split input, such as a command and its arguments.  Long before web application attacks, IFS abuse was one (of many) reasons why setuid shell scripts are a terrible idea.
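The behavior is easy to see locally.  In an unquoted expansion, ${IFS} is expanded and the result is then field-split on the very characters it contains, so it works as a stand-in for a space:

```shell
# $IFS defaults to <space><tab><newline>.  Unquoted, ${IFS} is expanded
# and then field-split on those same characters, so both of these run
# the identical command and print "hello world":
echo hello world
echo${IFS}hello${IFS}world

# The same trick passes arguments where a literal space is filtered:
printf 'secret' > /tmp/ifs-demo.txt
cat${IFS}/tmp/ifs-demo.txt
```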

Using $IFS in place of <space>, I was now able to pass arguments to commands.  For example, a payload of:


would display the contents of the password file.  I wound up pulling a copy of netcat down to the host and firing off a remote shell via:


to continue local privilege escalation and exploitation.

Thursday, May 27, 2021

Bygone Vulnerabilities - Remote Code Execution in Oracle Reports 10g/11g

Looking back at old vulnerabilities can be both fun and useful.  Part history, part nostalgia, and still a healthy dose of understanding the technical inner workings of some software or system.  I'm sure that George Santayana would agree.  I had planned to go into detail about a bygone vulnerability I found a long time ago in Oracle Reports, but for now this is just a teaser.

Several years ago, I performed an assessment that included an Oracle Reports server.  At the time, Oracle Reports had a number of known, high-impact security vulnerabilities.  Examples include a file overwrite vulnerability reported by Alexander Kornbrust  in 2005 and other vulnerabilities (such as CVE-2012-1734, CVE-2012-3152, and CVE-2012-3153) that can lead to arbitrary file reads and remote code execution, via the rwurl job type.

I wound up digging into Oracle Reports, and found a couple of new / non-public vulnerabilities.  These vulnerabilities led to arbitrary file reads and command execution, but did not use the rwurl job type, as that vulnerability had been fixed in my target environment.  I finished that assessment, did a little more product testing, and confirmed that my vulnerabilities could be exploited in Oracle 10gR2 and Oracle 11gR2 (other versions untested).  I also reported my findings to Oracle Product Security, and their response was (to paraphrase, if memory serves) that these vulnerabilities would not be exploitable in a properly-configured system set up in accordance with a customers-only configuration document.  Not being an Oracle expert, nor ever having been an Oracle administrator, my knowledge of what was common, typical, or recommended in Oracle environments was limited.  And so I didn't really pursue the issue further (nor did I come across other Oracle Reports servers in subsequent assessment target environments).

Extended Support for Oracle 11gR2 ended on December 31, 2020.  I had planned to blog in more detail about the vulnerabilities I found (many years after the fact), since the techniques to find and exploit older vulnerabilities can still be relevant and rewarding today.  However, since there are still systems online as of May 2021 that could be vulnerable, I'm going to wait for the time being.

With that said, if you are still running Oracle 10/11 Reports Server in 2021 or later, I'd recommend you take the following actions immediately:

  • Ensure that you are running a fully-supported Oracle Product, with the latest relevant security patches.  Like not Oracle 10g/11g.  Really.
  • Enforce authentication to the Oracle Reports Servlet by default
  • Validate and/or whitelist the expected values for report inputs and outputs
  • Limit the file creation/modification rights of the Report Servlet output
  • Review and enforce other relevant authentication, authorization, and file access controls on the local system.

Friday, May 21, 2021

Stupid Unix Tricks - Escaping a Restricted Shell

Welcome to the first post of what may become a series - Stupid Unix Tricks.

I love stupid Unix tricks.  Even better if they can be used for something security-related.  This remains one of my favorite security advisories ever.  So it shouldn't be a surprise that I really enjoy security assessments that involve breaking out of a restricted shell.  They're a lot of fun, and restricted shells are extremely hard to get right in terms of security and prevention.  (I feel the same about kiosk escapes too, but that's a topic for another time.)

Years ago, I was doing a security assessment on a product.  The product details are unimportant, but it had a web interface and limited cli "administrator" access through ssh.  The server was racked in a remote datacenter -- not something that we could get easy physical access to -- so booting to single-user mode or other ways of examining the raw filesystem were out.  

In this instance, the restricted shell was used as guardrails on a controlled cli environment for customers.  I'm not sure if this was done to protect against accidental customer error or to obscure the inner workings of the product.  The vendor did not provide root cli access to customers, so we were limited to a non-root service account.  The cli dropped the limited user into rbash and only had access to a handful of tools and scripts.  (No vi :!/bin/sh or tools with easy shell escapes, though.)  But some of the scripts were setuid root -- which meant there was probably a way to exploit the environment.  After some testing, I was able to generate error messages that showed me the full names and paths of the commands I was running, and I got an understanding of the underlying filesystem.

Poking around rbash some more, I realized I was able to read files by setting my HISTFILE environment variable to the target file, and then reviewing my history.  Combined with a knowledge of the filesystem, I could then grab the source for all of the setuid shell scripts. 
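The file-read trick above is reproducible in a plain bash, assuming bash's history builtins behave as they do on most Linux systems:

```shell
# Point HISTFILE at a file we "shouldn't" be reading, load it into the
# history list, and print it -- an indirect file read that needs no
# cat, less, or vi.
printf 'db_password=hunter2\n' > /tmp/restricted-file    # stand-in target
bash -c '
  HISTFILE=/tmp/restricted-file
  set -o history    # ensure history is active in a non-interactive shell
  history -r        # read $HISTFILE into the in-memory history list
  history           # print the list, i.e. the file contents
'
```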

A quick code review later, and I had a command injection bug.  From there, it was easy work to create a uid 0 user with a standard shell, and get full access to the system.

I never did take the time to figure out if the HISTFILE/rbash technique was publicly known, but some quick searching today reveals at least some more recent discussions.

Sunday, April 25, 2021

Second post - a blog introduction

A new security blog. In 2021. Um...yeah. 

I’ve been working in information security for the past 20+ years.  These days, most of my focus is on application security, penetration testing, red teaming, and offense — although I have plenty of slowly-aging experience in incident response, security operations, network/security engineering, UNIX administration, and policy work too. 

A lot of this work has been for employers and clients, so there hasn’t been much that I’ve wanted to or have been able to blog about in the past.  But at this point I have a handful of topics I’m motivated enough to write about.  So stay tuned for some thoughts on application security testing and automation, CFML security, and maybe a look-back at some old vulnerabilities that are more-easily anonymized due to the cobwebs plus time. 

Get comfortable, hang around, and thanks for reading! 

Wednesday, April 21, 2021

SSRF in ColdFusion/CFML Tags and Functions

TL;DR: Several ColdFusion/CFML tags and functions can process URLs as file path arguments -- including some tags and functions that you might not expect.  This can lead to Server-Side Request Forgery (SSRF) vulnerabilities in your code.  Developers should be sure to validate any user input passed to the affected tags and functions.


I recently observed some CFML tags and functions that could be used to perform Server-Side Request Forgery (SSRF), if they processed user-controlled input.  Based on this, I decided to do some fuzzing to identify all of the tags and functions that were potentially impacted by this type of attack.  There are many legitimate cases where applications need to process URLs and file paths.  And the security pitfalls of a few “dangerous” CFML tags and functions are well-known and well-documented.  However, there are other instances where the underlying functionality that leads to SSRF is unexpected, and user input is incorrectly assumed to be safe.  

Since I haven’t seen anything written about SSRF in CFML, I wanted to share some of my findings to help CFML developers secure their applications. Additionally, since there isn't a ColdFusion equivalent to something like PHP's allow_url_fopen (to prevent some functions from treating a URL as a valid file path) [1] [2], it's up to the developer to ensure that safe, validated input is passed to these tags and functions.

Some CFML Background

Maybe you’re not familiar with ColdFusion and CFML. (If you are, just skip ahead to the next section.) ColdFusion Markup Language (CFML) is a web application development language, first released in 1995.  Adobe now owns and maintains the original ColdFusion implementation, and there have been other commercial and open source implementations, including Lucee, Railo, and BlueDragon.  CFML use remains popular for both legacy applications and new development in organizations across healthcare, education, government, and various commercial industries.  Just ask Google and take a look at the 89 million+ results.

Server-side request forgery (SSRF)

Server-Side Request Forgery (SSRF) is a web application security vulnerability where an attacker is able to abuse functionality and make the application server request an arbitrary URL.   Some of the specifics can be application and language/platform dependent, but requests can typically be made for all supported URL schemes, such as http://, https://, ftp://, file:// and more.  An attacker can leverage SSRF to:

  • Make requests back to the server, including localhost-only services
  • Access internal hosts and services, including things like cloud metadata services
  • Access external hosts and services
  • Potentially send raw network requests

The techniques to turn an SSRF vulnerability into part of an exploit chain for a high-impact compromise are beyond the scope of this post, and will often depend on details in the affected application and target environment.  As a very simple example, consider an internal service that isn’t accessible from the public Internet.  An SSRF vulnerability within that environment may let an external attacker make requests to that internal service, breaking the security assumption that it should be inaccessible.  While this is only a high-level overview of SSRF, there’s lots more in-depth material available elsewhere -- such as here and here.  And for a ridiculously awesome look at some novel SSRF exploitation techniques, have a look at this presentation from Orange Tsai.


Some CFML tags and functions, by design, perform actions that could be dangerous or have security implications.  For example, most developers are aware that if you let a user specify the arguments to the <cfexecute> tag or the fileDelete() function, this could have disastrous consequences.

 But what about SSRF?  Any tag or function that processes and requests a URL as a parameter is potentially vulnerable to SSRF.  Some of these are obvious, such as <cfhttp>.  In the contrived example below, the user is able to control the URL that the cfhttp call will request.  And code like this should set off all kinds of security alarms for developers:

<cfhttp url="#url.requestURL#">

However, there are other tags and functions that will process and request URLs passed in parameters, where this functionality may be less obvious.  If any of these tags and functions consume user-controlled input in the affected parameters, an attacker will be able to perform SSRF.  Consider the code below:


<cfscript>
/* Some file processing stuff */

mimeType = fileGetMimeType(form.file);

/* Do more stuff to validate the MIME type and process the file */
</cfscript>
 The developer may be expecting form.file to contain an uploaded file object.  However, an attacker can pass a URL to fileGetMimeType() instead, and exploit SSRF.

Testing Results - Affected Tags and Functions

The following tags and functions can be vulnerable to SSRF if they pass unvalidated user input into affected parameters.  These results are based on testing against Lucee and Adobe ColdFusion 2018.

(* Lucee only)

Avoiding These Types of SSRF Vulnerabilities in your CFML Code

Developers should make sure to validate any user-controlled input before it is passed to any affected tags and functions.  If a URL is not expected input, or if following URLs is not intended behavior, additional validation logic should be added to prevent bad data or malicious activity.  For example, some functions will process both URLs/file paths and file objects as function arguments.  Validation in this case might enforce that only file objects are treated as valid input.  The specific techniques and logic to validate the user input may depend on the tag/function and necessary application functionality, and are beyond the scope of this post.
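As a rough sketch (the names and logic here are illustrative, not a drop-in fix), one cheap pre-check is to reject any string that carries a URL scheme before it ever reaches a file function:

```cfml
<cfscript>
// Illustrative only: refuse anything that looks like scheme:// before
// handing it to a file function such as fileGetMimeType().
candidate = form.file;
if ( reFind("^[a-zA-Z][a-zA-Z0-9+.\-]*://", candidate) ) {
    throw(type="Application", message="URL input is not allowed here.");
}
mimeType = fileGetMimeType(candidate);
</cfscript>
```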

Examples of user-controlled data can be any of the following:

  • Variables in the URL Scope
  • Variables in the FORM Scope
  • Some Variables in the CGI Scope
  • Cookies (Variables in the COOKIE Scope)
  • Secondary variables derived from URL, FORM, CGI, and Cookie Scopes


Bottom line -- make sure that you validate any user-controlled input passed to the tags and functions above.

[1] Update - Thanks to feedback from Brad Wood and Zac Spitzer, adding a note that various Resource providers (http, https, etc.) can be disabled in Lucee by commenting out the appropriate lines in lucee-server.xml.  I haven't tested this exhaustively, but it looks like this will prevent URLs with the disabled schemes (e.g., http://...) from being processed in some of these functions, but may still allow them in other functions.  

[2] Update - Adobe PSIRT has provided the following response:

"Thank you for the opportunity to review and respond to your blog post. Our ColdFusion engineering team has confirmed they leverage Apache Commons VFS in these tags/functions. This API provides a way to disable schemes like http://, ftp://, ram:// etc. by editing the file "org/apache/commons/vfs2/impl/providers.xml" within the commons-vfs jar file. It is strongly recommended for the ColdFusion developer to incorporate input validation in the supported schemes to prevent a risk of SSRF, even if certain schemes are disabled.

However, thanks to your research, our engineering team has determined it would be advantageous to make it easier for ColdFusion developers to disable schemes in an easier and intuitive way. Please keep an eye out for this change in a future release of ColdFusion."