CWE-113, HTTP Header Injection, 0x20, CRLF, Response Splitting, DORK Report

CWE-113: Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting')

Report generated by XSS.CX at Thu Mar 24 07:02:17 CDT 2011.

Public Domain Vulnerability Information, Security Articles, Vulnerability Reports, GHDB, DORK Search


1. HTTP header injection

2. Flash cross-domain policy

3. Silverlight cross-domain policy

4. Cookie without HttpOnly flag set

5. TRACE method is enabled

6. Robots.txt file



1. HTTP header injection

Summary

Severity:   High
Confidence:   Certain
Host:   http://uk.sitestat.com
Path:   /cliffordchance/cliffordchance/s

Issue detail

The value of REST URL parameter 3 is copied into the Location response header. The payload 4f9b3%0d%0aa748fffe1de was submitted in REST URL parameter 3. Because the server URL-decodes this value into a CRLF sequence before echoing it, the response contained an injected HTTP header.

Issue background

HTTP header injection vulnerabilities arise when user-supplied data is copied into a response header in an unsafe way. If an attacker can inject newline characters into the header, then they can inject new HTTP headers and also, by injecting an empty line, break out of the headers into the message body and write arbitrary content into the application's response.

Various kinds of attack can be delivered via HTTP header injection vulnerabilities. Any attack that can be delivered via cross-site scripting can usually be delivered via header injection, because the attacker can construct a request which causes arbitrary JavaScript to appear within the response body. Further, it is sometimes possible to leverage header injection vulnerabilities to poison the cache of any proxy server via which users access the application. Here, an attacker sends a crafted request which results in a "split" response containing arbitrary content. If the proxy server can be manipulated to associate the injected response with another URL used within the application, then the attacker can perform a "stored" attack against this URL which will compromise other users who request that URL in future.
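The splitting mechanism can be sketched with the payload from this report. This is an illustrative snippet, not the application's code: it shows how URL-decoding %0d%0a yields a CRLF, so a header built by naive string concatenation becomes two header lines.

```python
from urllib.parse import unquote

# The payload submitted in REST URL parameter 3 of this report.
payload = "4f9b3%0d%0aa748fffe1de"
decoded = unquote(payload)  # "4f9b3\r\na748fffe1de" - contains CR LF

# A hypothetical server that concatenates user input into the Location
# header emits two lines where it intended one, injecting a new header.
naive_header = (
    "Location: http://uk.sitestat.com/cliffordchance/cliffordchance/" + decoded
)
print(naive_header.splitlines())
```

The split output matches the broken Location header visible in the response below.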

Issue remediation

If possible, applications should avoid copying user-controllable data into HTTP response headers. If this is unavoidable, then the data should be strictly validated to prevent header injection attacks. In most situations, it will be appropriate to allow only short alphanumeric strings to be copied into headers, and any other input should be rejected. At a minimum, input containing any characters with ASCII codes less than 0x20 should be rejected.
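A minimal validator along these lines might look as follows; the function name and length limit are illustrative. Restricting input to short alphanumeric strings automatically excludes every control character below 0x20, including CR (0x0d) and LF (0x0a).

```python
import re


def safe_header_value(value: str, max_len: int = 64) -> str:
    """Allow only short alphanumeric strings into response headers.

    Hypothetical sketch of the remediation described above: anything
    outside [A-Za-z0-9], or longer than max_len, is rejected outright.
    """
    if len(value) > max_len or not re.fullmatch(r"[A-Za-z0-9]+", value):
        raise ValueError("unsafe header value")
    return value
```

With this check in place, the report's payload (which decodes to a string containing CRLF) is rejected before it reaches the Location header.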

Request

GET /cliffordchance/cliffordchance/4f9b3%0d%0aa748fffe1de?content.cliffordchance.home.p&ns__t=1300814551350&ns_jspageurl=www.cliffordchance.com/home.html HTTP/1.1
Host: uk.sitestat.com
Proxy-Connection: keep-alive
Referer: http://www.cliffordchance.com/home.html
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.151 Safari/534.16
Accept: */*
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

Response

HTTP/1.1 302 Found
Date: Tue, 22 Mar 2011 17:22:00 GMT
Server: Apache
Expires: Sat, 01 Jan 2000 00:00:00 GMT
Pragma: no-cache
Cache-Control: no-cache
P3P: policyref="http://www.nedstat.com/w3c/p3p.xml", CP="NOI DSP COR NID PSA ADM OUR IND NAV COM"
Set-Cookie: s1=4D88DAB82B690246; expires=Sun, 20-Mar-2016 17:22:00 GMT; path=/cliffordchance/cliffordchance/
Location: http://uk.sitestat.com/cliffordchance/cliffordchance/4f9b3
a748fffe1de
?content.cliffordchance.home.p&ns_m2=yes&ns_setsiteck=4D88DAB82B690246&ns__t=1300814551350&ns_jspageurl=www.cliffordchance.com/home.html
Content-Length: 407
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="http://uk.sitestat.com/cliffordchance/cliffordc
...[SNIP]...

2. Flash cross-domain policy

Summary

Severity:   High
Confidence:   Certain
Host:   http://uk.sitestat.com
Path:   /crossdomain.xml

Issue detail

The application publishes a Flash cross-domain policy which allows access from any domain.

Allowing access from all domains means that any domain can perform two-way interaction with this application. Unless the application consists entirely of unprotected public content, this policy is likely to present a significant security risk.

Issue background

The Flash cross-domain policy controls whether Flash client components running on other domains can perform two-way interaction with the domain which publishes the policy. If another domain is allowed by the policy, then that domain can potentially attack users of the application. If a user is logged in to the application, and visits a domain allowed by the policy, then any malicious content running on that domain can potentially gain full access to the application within the security context of the logged in user.

Even if an allowed domain is not overtly malicious in itself, security vulnerabilities within that domain could potentially be leveraged by a third-party attacker to exploit the trust relationship and attack the application which allows access.

Issue remediation

You should review the domains which are allowed by the Flash cross-domain policy and determine whether it is appropriate for the application to fully trust both the intentions and security posture of those domains.
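If cross-domain access is genuinely required, the policy should name each trusted domain explicitly rather than using a wildcard. A restricted policy might look like the following sketch (the domain shown is a placeholder, not a recommendation for this application):

```xml
<?xml version="1.0"?>
<cross-domain-policy>
    <!-- Placeholder: allow only one explicitly trusted host, over HTTPS -->
    <allow-access-from domain="www.example.com" secure="true" />
</cross-domain-policy>
```

Note that secure="true" (the default) prevents Flash content loaded over HTTP from accessing an HTTPS service, in contrast to the secure="false" setting seen in the response below.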

Request

GET /crossdomain.xml HTTP/1.0
Host: uk.sitestat.com

Response

HTTP/1.1 200 OK
Date: Tue, 22 Mar 2011 17:21:51 GMT
Server: Apache
Last-Modified: Mon, 24 Jan 2011 17:04:30 GMT
ETag: "530010-a7-49a9a97d80380"
Accept-Ranges: bytes
Content-Length: 167
Connection: close
Content-Type: text/xml

<cross-domain-policy>
<allow-access-from domain="*" secure="false" />
<allow-http-request-headers-from domain="*" headers="*" secure="false" />
</cross-domain-policy>

3. Silverlight cross-domain policy

Summary

Severity:   High
Confidence:   Certain
Host:   http://uk.sitestat.com
Path:   /clientaccesspolicy.xml

Issue detail

The application publishes a Silverlight cross-domain policy which allows access from any domain.

Allowing access from all domains means that any domain can perform two-way interaction with this application. Unless the application consists entirely of unprotected public content, this policy is likely to present a significant security risk.

Issue background

The Silverlight cross-domain policy controls whether Silverlight client components running on other domains can perform two-way interaction with the domain which publishes the policy. If another domain is allowed by the policy, then that domain can potentially attack users of the application. If a user is logged in to the application, and visits a domain allowed by the policy, then any malicious content running on that domain can potentially gain full access to the application within the security context of the logged in user.

Even if an allowed domain is not overtly malicious in itself, security vulnerabilities within that domain could potentially be leveraged by a third-party attacker to exploit the trust relationship and attack the application which allows access.

Issue remediation

You should review the domains which are allowed by the Silverlight cross-domain policy and determine whether it is appropriate for the application to fully trust both the intentions and security posture of those domains.
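As with the Flash policy, any required cross-domain access should enumerate trusted origins rather than using a wildcard. A restricted clientaccesspolicy.xml might look like this sketch (the domain shown is a placeholder):

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
    <cross-domain-access>
        <policy>
            <!-- Placeholder: grant access only to one explicitly trusted origin -->
            <allow-from>
                <domain uri="https://www.example.com"/>
            </allow-from>
            <grant-to>
                <resource path="/" include-subpaths="true"/>
            </grant-to>
        </policy>
    </cross-domain-access>
</access-policy>
```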

Request

GET /clientaccesspolicy.xml HTTP/1.0
Host: uk.sitestat.com

Response

HTTP/1.1 200 OK
Date: Tue, 22 Mar 2011 17:21:51 GMT
Server: Apache
Last-Modified: Mon, 24 Jan 2011 17:04:30 GMT
ETag: "4df000f-137-49a9a97d80380"
Accept-Ranges: bytes
Content-Length: 311
Connection: close
Content-Type: text/xml

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
<cross-domain-access>
<policy>
<allow-from http-request-headers="*">
<domain uri="*"/>
</allow-from>
<grant-to>
<reso
...[SNIP]...

4. Cookie without HttpOnly flag set

Summary

Severity:   Information
Confidence:   Certain
Host:   http://uk.sitestat.com
Path:   /cliffordchance/cliffordchance/s

Issue detail

The following cookie was issued by the application and does not have the HttpOnly flag set:

s1=4D88DAAE2D30007E; expires=Sun, 20-Mar-2016 17:21:50 GMT; path=/cliffordchance/cliffordchance/

The cookie does not appear to contain a session token, which may reduce the risk associated with this issue. You should review the contents of the cookie to determine its function.

Issue background

If the HttpOnly attribute is set on a cookie, then the cookie's value cannot be read or set by client-side JavaScript. This measure can prevent certain client-side attacks, such as cross-site scripting, from trivially capturing the cookie's value via an injected script.

Issue remediation

There is usually no good reason not to set the HttpOnly flag on all cookies. Unless you specifically require legitimate client-side scripts within your application to read or set a cookie's value, you should set the HttpOnly flag by including this attribute within the relevant Set-Cookie directive.

You should be aware that the restrictions imposed by the HttpOnly flag can potentially be circumvented in some circumstances, and that numerous other serious attacks can be delivered by client-side script injection, aside from simple cookie stealing.
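As a sketch of the fix, the standard-library cookie helper can emit the HttpOnly attribute; the cookie name and value below are taken from this report's response, but the code itself is illustrative, not the application's.

```python
from http.cookies import SimpleCookie

# Rebuild the report's cookie with the HttpOnly attribute added.
cookie = SimpleCookie()
cookie["s1"] = "4D88DAAE2D30007E"
cookie["s1"]["path"] = "/cliffordchance/cliffordchance/"
cookie["s1"]["httponly"] = True  # emitted as the HttpOnly attribute

header = cookie.output(header="Set-Cookie:")
print(header)
```

The resulting Set-Cookie header carries HttpOnly, so client-side script can no longer read the value via document.cookie.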

Request

GET /cliffordchance/cliffordchance/s?content.cliffordchance.home.p&ns__t=1300814551350&ns_jspageurl=www.cliffordchance.com/home.html HTTP/1.1
Host: uk.sitestat.com
Proxy-Connection: keep-alive
Referer: http://www.cliffordchance.com/home.html
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.151 Safari/534.16
Accept: */*
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

Response

HTTP/1.1 302 Found
Date: Tue, 22 Mar 2011 17:21:50 GMT
Server: Apache
Expires: Sat, 01 Jan 2000 00:00:00 GMT
Pragma: no-cache
Cache-Control: no-cache
P3P: policyref="http://www.nedstat.com/w3c/p3p.xml", CP="NOI DSP COR NID PSA ADM OUR IND NAV COM"
Set-Cookie: s1=4D88DAAE2D30007E; expires=Sun, 20-Mar-2016 17:21:50 GMT; path=/cliffordchance/cliffordchance/
Location: http://uk.sitestat.com/cliffordchance/cliffordchance/s?content.cliffordchance.home.p&ns_m2=yes&ns_setsiteck=4D88DAAE2D30007E&ns__t=1300814551350&ns_jspageurl=www.cliffordchance.com/home.html
Content-Length: 390
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="http://uk.sitestat.com/cliffordchance/cliffordc
...[SNIP]...

5. TRACE method is enabled

Summary

Severity:   Information
Confidence:   Certain
Host:   http://uk.sitestat.com
Path:   /

Issue description

The TRACE method is designed for diagnostic purposes. If enabled, the web server will respond to requests which use the TRACE method by echoing in its response the exact request which was received.

Although this behaviour is apparently harmless in itself, it can sometimes be leveraged to support attacks against other application users. If an attacker can find a way of causing a user to make a TRACE request, and can retrieve the response to that request, then the attacker will be able to capture any sensitive data which is included in the request by the user's browser, for example session cookies or credentials for platform-level authentication. This may exacerbate the impact of other vulnerabilities, such as cross-site scripting.

Issue remediation

The TRACE method should be disabled on the web server.
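On Apache httpd, which this host reports in its Server header, TRACE can be disabled globally with a single directive in the main server configuration:

```apache
# Disable the TRACE method server-wide (Apache httpd 1.3.34/2.0.55 and later)
TraceEnable off
```

The directive must appear in the server-wide configuration; it is not honoured inside .htaccess files.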

Request

TRACE / HTTP/1.0
Host: uk.sitestat.com
Cookie: bf6cb69b3af78b0d

Response

HTTP/1.1 200 OK
Date: Tue, 22 Mar 2011 17:21:51 GMT
Server: Apache
Connection: close
Content-Type: message/http

TRACE / HTTP/1.0
Host: uk.sitestat.com
Cookie: bf6cb69b3af78b0d


6. Robots.txt file

Summary

Severity:   Information
Confidence:   Certain
Host:   http://uk.sitestat.com
Path:   /cliffordchance/cliffordchance/s

Issue detail

The web server contains a robots.txt file.

Issue background

The file robots.txt is used to give instructions to web robots, such as search engine crawlers, about locations within the web site which robots are allowed, or not allowed, to crawl and index.

The presence of the robots.txt file does not in itself present any kind of security vulnerability. However, it is often used to identify restricted or private areas of a site's contents. The information in the file may therefore help an attacker to map out the site's contents, especially if some of the locations identified are not linked from elsewhere in the site. If the application relies on robots.txt to protect access to these areas, and does not enforce proper access control over them, then this presents a serious vulnerability.

Issue remediation

The robots.txt file is not itself a security threat, and its correct use can represent good practice for non-security reasons. However, you should not assume that all web robots will honour the file's instructions. Rather, assume that attackers will pay close attention to any locations identified in the file, and do not rely on robots.txt to provide any kind of protection against unauthorised access.

Request

GET /robots.txt HTTP/1.0
Host: uk.sitestat.com

Response

HTTP/1.1 200 OK
Date: Tue, 22 Mar 2011 17:21:51 GMT
Server: Apache
Last-Modified: Mon, 24 Jan 2011 17:04:30 GMT
ETag: "9b0014-1c-49a9a97d80380"
Accept-Ranges: bytes
Content-Length: 28
Connection: close
Content-Type: text/plain; charset=UTF-8

User-agent: *
Disallow: /
