Had a very strange issue today. We have a WCF-based service system we're deploying on a client's network. One of the services uses Basic Authentication for normal calls, handled via a custom HttpModule. However, we wanted one specific subfolder of the WCF service (/downloads/) to use only anonymous auth, so that files could be downloaded without a password.
It seemed like it should be relatively straightforward. I modified the logic in the Basic Auth module to skip the authentication step for any path starting with /downloads/. It worked beautifully in our testing environment. However, the problems began when we moved the code onto our client's network. Every time I tried to access a URL containing /downloads/, I incorrectly got the Basic Auth prompt, even though that path was supposed to be exempt.
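For context, the exemption logic looked roughly like this - a minimal sketch, not the actual module; the class name, realm, and validation details are hypothetical:

using System;
using System.Web;

// Hypothetical sketch of a Basic Auth module with a path exemption.
public class BasicAuthModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.AuthenticateRequest += OnAuthenticateRequest;
    }

    private void OnAuthenticateRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        var path = app.Context.Request.AppRelativeCurrentExecutionFilePath;

        // Skip Basic Auth entirely for anything under /downloads/
        if (path.StartsWith("~/downloads/", StringComparison.OrdinalIgnoreCase))
            return;

        var header = app.Context.Request.Headers["Authorization"];
        if (header == null || !header.StartsWith("Basic "))
        {
            // No credentials supplied: challenge the client
            app.Context.Response.StatusCode = 401;
            app.Context.Response.AddHeader("WWW-Authenticate", "Basic realm=\"Service\"");
            app.CompleteRequest();
        }
        // ...otherwise decode the header and validate the credentials
        // against the database (omitted here)
    }

    public void Dispose() { }
}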
In an attempt to debug the issue, I commented out the Basic Auth module completely from web.config so the website would use Anonymous Auth globally. However, when I tried to access any path in the service from a web browser, it generated a 401.3 error, which is a physical ACL access-denied error. That made no sense, because the application pool identity for the IIS website had full permissions to the folder containing the service files.
After doing a little research, I discovered that the account used by default for anonymous auth is specified separately from the application pool identity. Even if you specify in the website's Basic Settings that "Connect As" should be Pass-through (Application Pool Identity), that is separate from the setting for Anonymous Auth. It turns out that if you right-click the Anonymous Authentication entry on an IIS site and choose Edit, you can specify the account used for anonymous requests, and by default that account is IUSR. We changed this to use the application pool identity and it started working beautifully.
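For reference, a sketch of what that change looks like in applicationHost.config (if I have the schema right - the same element can also be set with appcmd). An empty userName tells IIS to use the application pool identity for anonymous requests:

<!-- In applicationHost.config, under system.webServer/security/authentication.
     An empty userName means "use the application pool identity". -->
<anonymousAuthentication enabled="true" userName="" />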
However, this leaves me somewhat puzzled as to how the Basic Auth account was working when Anonymous Auth was not. The Basic Auth accounts are tied to a database, which is entirely segregated from the ACL-level permissions in Windows, which are tied to Active Directory on the network. Apparently, just by virtue of using Basic Auth with any account, IIS accesses files as the Application Pool Identity, but if you supply no username at all, it falls back to the Anonymous Auth default user setting - regardless of whether your Basic Auth username has anything to do with the network. Very unexpected behavior, and very frustrating to debug.
Happy programming!
Friday, August 9, 2013
A Debugging Nightmare
A few weeks ago, I ran into the most complicated bug I think I have ever had to solve in my entire programming career.
We are developing a fairly complex system that involves 4 WCF services, a SQL database, and an integration component for a Microsoft Access front-end application. The bulk of the system involves synchronizing data between the Microsoft Access database and the SQL database via XML files. The system was largely developed by a 3rd-party contractor, who came on-site for a day so that we could work together to try to resolve the issue.
The basic problem was that the sync would work fine when we started it manually via a stored procedure in SQL Server, but when run end-to-end from Microsoft Access, it failed every time. The two calls should have been 100% identical, because we were manually calling the same stored procedure that eventually gets called by Microsoft Access. We could even demonstrate through our logging that the exact same stored procedure was being called in both cases with the same parameters, but it would only work when manually run.
We traced the calls through using a combination of database logging, text-file logging, and Fiddler tracing to try to see what was going on. There was nothing we could see different about the two requests and no clear reason why one would fail, until suddenly we stumbled on a clue. When running the end-to-end test from Microsoft Access, it would fail after 30 seconds with a timeout error message.
At first, we thought one of the WCF services was timing out, because the error message looked exactly like a web service timeout. But eventually, with the help of Fiddler (which I *highly* recommend, btw!), it became clear that the error came from the server side via a FaultException, not from the client side. So the error was occurring inside the service. Eventually, I pinpointed it to a single database call that was generating a timeout error, but only when made through the end-to-end client call.
It wasn't until I pulled the stack trace out of the FaultException and tracked down the exact line with the error that I had the "aha!" moment. The real problem turned out to be a timeout caused by the process running in a transaction. A record is modified right before the stored procedure is called; the procedure then calls .NET code that tries to modify the same record, but the table is still locked. Apparently, .NET code called from a stored procedure runs as a separate session from the caller, so the two sessions end up deadlocked. Once I saw the error message, it was immediately obvious what the problem was. I simply removed the conflicting update statements and now it works perfectly.
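If the .NET code in question is a SQLCLR procedure (the post doesn't show the real code, so the sketch below is hypothetical, with made-up table names), the session split looks like this: a regular loopback connection is a separate session that blocks on the caller's locks, while the special context connection runs inside the caller's session and transaction:

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class SyncProcedures
{
    // Hypothetical SQLCLR procedure illustrating the session issue.
    [SqlProcedure]
    public static void UpdateSyncRecord(int recordId)
    {
        // A regular loopback connection is a *separate* session from the
        // calling stored procedure. If the caller's open transaction holds
        // a lock on this row, the UPDATE blocks until it times out.
        using (var conn = new SqlConnection(
            "Data Source=(local);Initial Catalog=SyncDb;Integrated Security=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "UPDATE SyncRecords SET Status = 'Processed' WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", recordId);
                cmd.ExecuteNonQuery(); // times out while the caller holds the lock
            }
        }

        // The context connection, by contrast, runs inside the caller's own
        // session and transaction, so it can modify the locked row freely.
        using (var conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "UPDATE SyncRecords SET Status = 'Processed' WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", recordId);
                cmd.ExecuteNonQuery();
            }
        }
    }
}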
Happy programming!
Monday, July 29, 2013
Middle-Click Conundrum
So today I am once again reminded how stubbornly frustrating computer software can be when it goes awry. It seems like on a daily basis I am confounded by some new unexpected behavior, an expectation mismatch, or an obscure error message that I have never seen before.
Today it is a middle mouse-button problem. This isn't the first time I've seen it, but I think I first noticed it a couple of weeks ago on Slashdot. I have a habit of using middle-click to open a link in a new tab, but recently, in Slashdot articles, middle-clicking both opens a new tab *and* navigates in the current tab. Quite annoying!
Doing a little research, I found (as usual) that I'm not the only one plagued by this behavior. According to this SuperUser post, it has to do with links that use onclick events for navigation. Sure enough, when I examine the link in Chrome Dev Tools, there is indeed an onclick handler on the anchor that performs navigation.
The fix is simple enough. Install this Chrome extension, and the problem goes away. Works for me!
Tuesday, January 15, 2013
Rewrite DNS Request with Fiddler
Today one of my colleagues had a situation where they needed a piece of software to communicate with a test webservice. However, we could not find a way to configure the software to use a custom webservice URL. Enter Fiddler. When you want a URL, say sub.domain.com, to end up sending the request to sub-test.domain.com, you can use a custom rule in Fiddler. This works at a higher level than the HOSTS file, by actually rewriting the request to a different domain name before the DNS lookup and request are even made - just changing the DNS routing with the HOSTS file may not work because of host header rules on the server side.
To accomplish this, open Fiddler, and navigate to the Rules menu -> Customize Rules. It should open the file CustomRules.js in your default editor (probably Notepad). You'll see that this file contains some default rules out-of-the-box. There are also some example rules in comments that let you highlight certain requests or modify any part of the request/response cycle, which is in itself a very powerful feature.
We are interested in adding a rule to OnBeforeRequest. Find the OnBeforeRequest function near the middle of the file, and drop down to the end of the method. We will add a rule like this:
if (oSession.HostnameIs("sub.domain.com")) {
    oSession.hostname = "sub-test.domain.com";
}
This says that when the domain name "sub.domain.com" is encountered in a request, then the request hostname should be rewritten to "sub-test.domain.com". Since this occurs in the OnBeforeRequest event, the rewriting occurs even before the request is made, so the DNS translation applies to the new domain name instead. The server can't tell the difference, and thinks it's just a request for sub-test.domain.com, which makes for a very clean redirect. Then the response comes back into Fiddler, and back to the calling software, which doesn't know that its request was rewritten.
Hope this helps. Happy programming! :)
Monday, January 14, 2013
Multi-Hosting with Apache
As David Wheeler once famously said, "All problems in computer science can be solved by another level of indirection." In this case, the indirection is a web server that acts as a router. In developing an Apache/Tomcat application, I've often needed to run multiple separate servers for Dev, QA, and staging environments at the same time. Since I want them all to be easily accessible, I want separate, user-friendly URLs, all hosted on port 80. How do you solve a problem like this?
The obvious solution is to run a single Tomcat server with multiple server entries for the separate URLs. However, this ties up port 80 with a single application, and I lose the isolation between systems that is representative of the production environment. The solution? Add another layer of indirection, via an Apache server that simply does routing. Download and install a bare-bones Apache service, then add entries like the following to httpd.conf to proxy the requests:
<VirtualHost *:80>
    ServerName sub.domain.com

    ProxyPass / http://internal-ip:port/
    ProxyPassReverse / http://internal-ip:port/

    # ==== UPDATE HEADERS FOR FILTERING ====
    Header edit Location http://internal.domain.com/(.*) http://sub.domain.com/$1

    # ==== SETUP SUBSTITUTION CONTENT TYPES ====
    AddOutputFilterByType SUBSTITUTE text/html
    AddOutputFilterByType SUBSTITUTE text/css
    AddOutputFilterByType SUBSTITUTE text/javascript

    # ==== APPLY URL RENAME SUBSTITUTIONS TO LOCALHOST ====
    FilterChain replace
    Substitute "s|internal.domain.com|sub.domain.com|ni"
</VirtualHost>

<VirtualHost *:443>
    ServerName sub.domain.com

    # ==== HANDLE SSL REQUESTS ====
    SSLEngine On
    SSLProxyEngine On
    SSLCertificateFile "/path/to/cer"
    SSLCertificateKeyFile "/path/to/key"

    # ==== PERFORM PROXYING TO LOCAL SERVER ====
    ProxyPass / https://internal-ip:ssl-port/
    ProxyPassReverse / https://internal-ip:ssl-port/

    # ==== UPDATE HEADERS FOR FILTERING ====
    Header edit Location https://internal.domain.com/(.*) https://sub.domain.com/$1

    # ==== SETUP SUBSTITUTION CONTENT TYPES ====
    AddOutputFilterByType SUBSTITUTE text/html
    AddOutputFilterByType SUBSTITUTE text/css
    AddOutputFilterByType SUBSTITUTE text/javascript

    # ==== APPLY URL RENAME SUBSTITUTIONS TO LOCALHOST ====
    FilterChain replace
    Substitute "s|internal.domain.com|sub.domain.com|ni"
</VirtualHost>
You will also need to turn on the following modules for this to work (uncomment in modules section of httpd.conf - you don't need ssl_module if you're not going to use the SSL section):
- filter_module
- proxy_module
- proxy_connect_module
- proxy_ftp_module
- proxy_html_module
- proxy_http_module
- ssl_module
- xml2enc_module
The proxying logic above also lets the internal server keep thinking its URL is "internal.domain.com" rather than "sub.domain.com", so you can test multiple servers against the same external URL while allowing access via a custom URL. This is a very powerful setup that lets you point just about anything anywhere. In fact, you can move internal servers around without making configuration changes on those servers, by simply updating the routing Apache config (and opening the proper ports in the firewall if you are routing to another machine). I've used it extensively to manage a collection of 5 nearly-identical Apache/Tomcat servers.
Thursday, January 10, 2013
C# - Bringing Implicitly-Typed Variables Out of Scope
I have a habit of using implicitly-typed declarations for everything in my code, because I think it makes things clean, simple, and easier to maintain. It also has the side effect of making all your variable declarations short and lined up, because they all start with "var".
If you're like me and use implicit typing, you may have encountered the issue where an implicitly-typed variable needs to be brought out of an inner scope into an outer scope. For example, suppose you are requesting some data from a database and you want to handle any Exceptions. Naturally, you wrap the call in a try...catch block:
try
{
    var theData = GetData();
}
catch (Exception ex)
{
    // handle exception
}
// do something with theData
Now suppose you want to use theData later in the code. How do you reference it? You have to define theData outside of the try..catch and initialize it to null. Since you can't implicitly type a variable initialized to null, this forces you to declare the variable's type explicitly. So much for implicitly-typed declarations!
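In other words, the conventional workaround looks like this (assuming, as below, that GetData returns IQueryable<Company>):

// The explicit declaration the rest of this post avoids:
IQueryable<Company> theData = null; // type must be spelled out
try
{
    theData = GetData();
}
catch (Exception ex)
{
    // handle exception
}
// do something with theData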
There is a clever workaround to this situation that makes use of generics and lambda expressions. The compiler is usually able to determine the proper type of a lambda expression when its body is a simple method call and return. Supposing that GetData returns a type of IQueryable<Company>, say, the following lambda expression clearly has a type of Func<IQueryable<Company>>:
() =>
{
    try
    {
        return GetData();
    }
    catch (Exception ex)
    {
        // handle exception
        return null;
    }
};
Now that we've wrapped the code into a Func, it can be passed into a generic method that does nothing more than call the func and send the return value back to the caller.
static T LiftScope<T>(Func<T> scopedFunction)
{
    T result = scopedFunction();
    return result;
}
It might seem like a pointless exercise to pass a function into another method that does nothing but call it. However, since the compiler can determine the return type of the inner function, it can also determine the generic type T. Therefore the result can be assigned to an implicitly-typed variable, which no longer has the scoping issue, since the entire inner scope was passed in and handled by another method:
var theData = LiftScope(() =>
{
    try
    {
        return GetData();
    }
    catch (Exception ex)
    {
        // handle exception
        return null;
    }
});
Notice that I did not have to define the type anywhere in the above code, and I'm able to pull the value out of the try..catch into variable theData. I've found this method so useful that it is a static utility method in all of my projects. You can also use this for other things besides try..catch, because it's flexible enough to accommodate any scenario:
// if statement
var result = LiftScope(() =>
{
    if (a == b)
        return GetData(true);
    else if (b == c)
        return GetData(false);
    else
        return GetData(true, 2);
});

// using statement
var result = LiftScope(() =>
{
    using (var myContext = new MyDataContext())
    {
        return myContext.MyTable.Where(w => w.A == B).ToList();
    }
});

Happy programming!
Wednesday, January 9, 2013
Surface RT Jailbreak
The wonderful people at the XDA Developers forum have successfully unlocked the Windows Surface RT. I have been watching the development since Christmas, when I got a new Surface. This week the jailbreak was successful, so I dug in and followed the instructions (here's a blog post by the exploit's author explaining a little more about the hack). After a couple of tries, and a couple of BSODs, I have a fully unlocked Surface!
I immediately started downloading the sample ARM-compiled apps on the forum to run them. Since the Surface comes with .NET 4.5 built for ARM, any simple .NET app should work out-of-the-box. To date, the following applications have been built for ARM and successfully executed on an unlocked Windows Surface RT (links go to the post with a downloadable ARM executable):
- Tight-VNC - Remote administration server/client
- Putty - SSH/Telnet/etc. client
- Bochs - x86 emulator
- 7-zip - powerful zip/unzip utility
Some notes from my own experience if you are going to give this a try:
- Use the new app published by netham45 instead of building app1 for yourself.
- Don't forget to run the runExploit.bat file as administrator, or it will appear to succeed but not actually work.
- When the batch script asks you to press volume down, then enter, make sure to do both quickly, or you will likely experience a BSOD.
I'm excited and looking forward to a polished exploit, and a growing library of ARM-ready apps :)