The Dot (.) Matters for Cross-Platform Applications
Recently, my network software ran into a strange security issue: a vulnerability scanner reported that it could gain access to protected resources without authorization.
Accessing a Protected Resource
I have a protected resource:
If you try to access “https://ip/secured/neverseen.txt”, you are redirected to the authorization page. Everything looks fine.
When you try to access “https://ip/./secured/neverseen.txt”, it is still okay. According to RFC 3986, the “.” segment is normalized, so the client also sends “https://ip/secured/neverseen.txt”.
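You can see this dot-segment normalization from RFC 3986 (section 5.2.4, “Remove Dot Segments”) in action with Python's standard library; the host “ip” is just the placeholder used throughout this article:

```python
from urllib.parse import urljoin

# urljoin resolves a reference against a base URL per RFC 3986,
# which includes removing "." segments from the path.
resolved = urljoin("https://ip/", "./secured/neverseen.txt")
print(resolved)  # → https://ip/secured/neverseen.txt
```

So a standards-compliant client never puts the “.” segment on the wire in the first place.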
Of course, web access shouldn’t depend on the client sending a standards-compliant URI. I used “curl” to test the issue again.
Test with curl Tool
My web handler receives: “ip/secured/neverseen.txt”.
Okay, so curl also normalizes the dot (.).
Next, try sending the request with the dot URL-encoded:
This time my simple web handler is not triggered. It doesn’t consider
“ip/./secured/neverseen.txt” to be the same string as “ip/secured/neverseen.txt”.
Now I normalize URIs according to RFC 3986 before dispatching to any of my URI handlers. After that, all request URIs are canonical, and these two strings correctly trigger the same handler.
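The article doesn’t show the handler code, so here is a minimal, simplified sketch of the RFC 3986 section 5.2.4 dot-segment removal, assuming the path has already been percent-decoded (it skips some edge cases, such as preserving a trailing slash after a final “..”):

```python
def remove_dot_segments(path: str) -> str:
    """Normalize "." and ".." segments per RFC 3986, section 5.2.4 (simplified)."""
    output = []
    for segment in path.split("/"):
        if segment == ".":
            continue                    # "." segments are simply dropped
        if segment == "..":
            # ".." removes the previous segment, but never the leading ""
            # that represents the root slash.
            if output and output[-1] != "":
                output.pop()
            continue
        output.append(segment)
    return "/".join(output)

print(remove_dot_segments("/./secured/neverseen.txt"))  # → /secured/neverseen.txt
```

Running every incoming path through a function like this before routing means both request forms reach the same handler.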
Test It Again!
I ran the vulnerability test again. Unfortunately, it still reported errors: it could download my “neverseen.txt” file directly, without authorization.
Oops!!! Why?
I checked the logs in the exploit tool to understand why it could still download the secured file.
It’s weird. I expected an HTTP 404 error.
Linux vs Windows
This is a filesystem difference between Windows and Unix/Linux.
On Windows, “Myfolder.” and “Myfolder” refer to the same folder.
Unix/Linux, however, treats them as two different names.
You can try this in Windows File Explorer: create a folder named “abc”, then try to create “abc.”. Explorer will report that the folder already exists.
I added a rule to my URI normalization code that converts “folder.” to “folder”, and it finally passed the vulnerability test!
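A sketch of that extra rule, again in Python for illustration (the function name is my own): strip trailing dots from each path segment, since Windows treats “folder.” and “folder” as the same directory. This is applied after dot-segment normalization, so literal “.” and “..” segments are left alone:

```python
def strip_trailing_dots(path: str) -> str:
    """Windows-specific rule: "folder." and "folder" name the same directory,
    so drop trailing dots from each segment before routing."""
    segments = []
    for segment in path.split("/"):
        if segment in (".", ".."):
            # Handled by RFC 3986 dot-segment removal, not by this rule.
            segments.append(segment)
        else:
            segments.append(segment.rstrip("."))
    return "/".join(segments)

print(strip_trailing_dots("/secured./neverseen.txt"))  # → /secured/neverseen.txt
```

With this in place, “/secured./neverseen.txt” hits the same protected-resource handler as “/secured/neverseen.txt” instead of slipping past the string match and straight to the Windows filesystem.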
This is a really simple but completely unexpected problem.