Sunday, October 1, 2017

Ready, Fire, Aim!

Debugging Isn't Guesswork

I had a recent encounter with a team member that reignited a continuing source of aggravation for me when dealing with less assiduous developers. We were pair-programming, attempting to debug some VBA code that the other person had written. I was at the keyboard with my counterpart looking over my shoulder, offering advice.
"You know what it might be, it might be <insert random code location here>."
"Oh wait, I know what it is. It's <insert random code location here>."
"Hmmm. Take a look at <insert random code location here>."
Setting aside how incredibly frustrating it was to jump from spot to spot in code I was unfamiliar with, while simultaneously trying to understand the overall logic, none of the suggestions panned out. Which led me to growl (at a less than acceptable volume),
"Let's stop guessing and do the damned analysis." (I'm not always known for my tact.)
To me, that's what debugging is - analysis. But I see far too many developers, including some of the most experienced, who view debugging as total guesswork. It's the coding equivalent of throwing dung against a wall to see what sticks. And here's the real danger with that approach: sometimes your random guess appears to have corrected the problem, but actually hasn't. Because you didn't analyze and determine the exact cause, you may have just masked the original defect with another defect.
Please stop debugging via guesswork. Please.

Aim, Then Fire

My recommendation is to Identify-Locate-Kill:
  1. Identify and focus on what you're trying to achieve. This sounds straightforward, but we often lose sight of the goal. While searching for an off-by-one defect, have you found several other unrelated issues and then gotten sidetracked correcting them? Stay focused on the original intent. Make note of the other issues and come back to them later. Sometimes an informed guess by someone who knows the code can lead you to the general area, where you can begin your analysis, but don't just blindly assume that this is definitively the location of the defect.
  2. Use analysis to locate the precise location of the defect. This is the crucial step. Don't skip past performing a step-by-step trace of the code to watch the values of variables and check the order of code execution. Modern debugging tools make this far simpler than when I first started coding, way back in the mists of time. However, sometimes the act of attempting to observe an issue makes the issue difficult to duplicate. In these situations don't forget the power of simple logging statements in code - sometimes it's the only way to locate a gnarly defect (see the sketch after this list).
  3. Kill the bug with a well-considered correction. This can take several forms. Sometimes it's a deletion of code, rather than an addition. Sometimes it's a re-ordering of a few lines. And in other situations, you need a complete code refactor. Discuss the approach with other developers you trust, preferably ones who know the codebase, to ensure that your correction doesn't have repercussions. Test your modifications by hand to ensure minimum viable correct operation.
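To make step 2 concrete, here's a toy C# illustration (entirely hypothetical, not from the session described above) of using plain logging statements to watch variable values and execution order when attaching a debugger would perturb the behavior:

    using System;
    using System.Diagnostics;

    internal static class TraceDemo
    {
        private static void Main()
        {
            Trace.Listeners.Add(new ConsoleTraceListener());

            int[] values = { 3, 1, 4, 1, 5 };
            int total = 0;
            for (int i = 0; i < values.Length; i++)
            {
                total += values[i];
                // One log line per iteration pins down exactly where the state
                // diverges from expectation, without pausing the program.
                Trace.WriteLine($"i={i}, values[i]={values[i]}, running total={total}");
            }
            Console.WriteLine($"Final total: {total}");
        }
    }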
Not indicated here is the need for overall application testing to ensure no regressions have been introduced. However, I don't consider that part of debugging - unit tests should be a regular, repeating step in your development cycle.

I Can Get Some...Satisfaction

A lot of developers groan at the thought of debugging, especially correcting someone else's code. While I also prefer the pleasure of greenfield development, debugging is unavoidable. And when I know I've located and positively corrected a code defect, I experience a profound sense of satisfaction. Part of that is because debugging is hard. It can be considerably more difficult than coding a new application or feature, especially if you're trying to decipher some other developer's thought process - or lack thereof.
So please, analyze, don't guess.

Wednesday, January 6, 2016

The Sluggish Three Finger Salute Returns!

Not So Super(fetch)

In my previous post I discussed my aggravation with a modification Microsoft has made in Windows 8 and 10 that causes some Metro apps to become non-responsive to certain Windows messages, specifically the WM_GETHOTKEY message that queries apps for their operating system shortcut key. My post provided some code to help identify which applications were non-responsive so that you could kill their processes and set their configuration to prohibit background loading. Initially this worked well for me: I identified the processes, shut them down, and my hotkeys worked properly again. All was well.

Except it didn't last. I leave my workstation powered up for evening backups, and when I hit my trusty Ctrl-Alt-U combination to start UltraEdit the next morning, the shortcut key was back to a three-second delay! Grrrrrrr! I fired up my utility and was surprised to find that Calculator, which I knew I had killed the evening before, was back again. Huh? How did that happen?

So once again I put on my Fenton Hardy hat (yes, I'm showing my age) and tried to find out how this was happening. What I determined was that the Windows Superfetch service, which is essentially a disk caching mechanism, was pre-loading certain applications that it considers frequently used. It loads them into RAM and places them in a hibernated state. The applications take up RAM, but I have a 16GB workstation so that's not an issue. The idea is that when I invoke the application it doesn't have to be loaded from disk again.

In theory this is a good thing. Back in the bad old DOS days, and in early versions of Windows, disk caching programs were a great way to speed up operations on frequently used programs. But in this day of SSD drives I'm not certain it's necessary. However, as I dove deeper it seemed that Superfetch is something that shouldn't be disabled without thought. Yet there doesn't seem to be a way to exclude certain apps from being preloaded. Each time I closed Calculator it would reappear after a period of time and my hotkeys got sluggish again. A pox on you, Superfetch!

So, once again enamored with a quick-and-dirty solution, I decided to code a little system tray application that would periodically scan memory for the offending apps and, if they don't respond to the WM_GETHOTKEY message, kill them mercilessly. Granted, it's a clunky way to go about this, but there seems to be no way to prohibit these applications from restarting. Microsoft, please make these hibernated apps respond to this message, or let us exclude some applications from the Superfetch service. Please!

So here's how this works. In Visual Studio create a new Windows Forms project. Then delete the default Form1 that is created by Visual Studio. This will cause a "not found" error in the following line of code in Program.cs:
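In a default Windows Forms project, that line is the standard startup call:

    Application.Run(new Form1());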
You can safely delete this line of code. We won't be using it.

We'll need a couple of new classes before we put new startup code in the Program.cs file. First, we need code to encapsulate the system tray support. Please note: I adapted this code from something I found on the Internet a while back but I didn't save the reference and I can't for the life of me remember where I found it. My apologies to the original author.

First we'll need a class to encapsulate the popup context menu for the system tray icon, so we can exit the program.
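Here's a minimal sketch of such a class; the original listing didn't survive the blog template, so the class name and structure are mine:

    using System.Windows.Forms;

    internal class TrayContextMenu
    {
        // Builds the popup menu shown when the tray icon is right-clicked.
        public ContextMenuStrip Create()
        {
            var menu = new ContextMenuStrip();
            var exitItem = new ToolStripMenuItem("Exit");
            exitItem.Click += (sender, args) => Application.Exit();
            menu.Items.Add(exitItem);
            return menu;
        }
    }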
Then we need a class to handle the icon and tooltip display.
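Again a reconstruction; the WinForms NotifyIcon class does the heavy lifting, and the icon and tooltip text are placeholders:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    internal sealed class TrayIcon : IDisposable
    {
        private readonly NotifyIcon _notifyIcon;

        public TrayIcon(ContextMenuStrip menu)
        {
            _notifyIcon = new NotifyIcon
            {
                Icon = SystemIcons.Application,   // substitute a real .ico here
                Text = "Hotkey watchdog",         // the hover tooltip
                ContextMenuStrip = menu,
                Visible = true
            };
        }

        public void Dispose()
        {
            _notifyIcon.Visible = false;
            _notifyIcon.Dispose();
        }
    }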
In the Program.cs file, we initialize the system tray support, then set a timer to periodically scan for offending applications. I created a custom configuration class (not shown in this post) for maintaining the list of bad applications. (For the record, the apps that were unresponsive for me and kept being reloaded were Calculator, Settings, and Movies & TV.) You can build your list however you see fit.
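A sketch of the startup code, assuming the two classes above; ScanAndKill is a placeholder for the scanning logic, which is essentially the code from the previous post:

    using System;
    using System.Windows.Forms;

    internal static class Program
    {
        [STAThread]
        private static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);

            using (var tray = new TrayIcon(new TrayContextMenu().Create()))
            {
                var timer = new Timer { Interval = 3 * 60 * 1000 };   // three minutes
                timer.Tick += (sender, args) => ScanAndKill();
                timer.Start();

                Application.Run();   // message loop with no main form
            }
        }

        private static void ScanAndKill()
        {
            // Enumerate the top-level windows, send WM_GETHOTKEY with a timeout,
            // and kill the owning process of any configured "bad application"
            // that fails to respond (see the previous post for that code).
        }
    }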

A lot of the Windows API specific code is taken from my previous post, including the task spawning. I guess I could have recoded it to be less complex but it was easier to just drop it in from the other application.
While this approach is less than elegant, the code appears to work nicely. I have the scan set for every three minutes, and it only runs for a few milliseconds per window, except for the offending apps, which take a full three seconds. So there is a potential window of time where an offending app may be resident when I press a hotkey. But in practice it hasn't happened yet, so I appear to have alleviated 99.99% of the issue. Crude but effective.

Thursday, December 31, 2015

The Sluggish Three Finger Salute

Not So Shortcuts

Like many of you I recently upgraded both my portable and my primary workstation to Windows 10. Unlike some of you I did this willingly, having heard that this release of Windows was significantly improved from v8.1 (which I did not install) and because I do like to stay on top of the platform for my clients that will be using it.

Shortly after installing the upgrade, and having gone through some painful changes and learning curves, I noticed that my Windows shortcut keys were responding very sluggishly. As a developer and keyboard enthusiast I have quite a number of shortcut keys defined at the OS level. For example, Ctrl-Alt-U pops up my trusty copy of UltraEdit, Ctrl-Alt-Q fires up SQL Server Management Studio, and Ctrl-Alt-X launches Excel. You get the picture.

I hadn't seen this sluggish shortcut behavior in Windows 7 and it was immensely frustrating. Almost all my applications are on SSD drives and I'm running a quad-core i7 CPU so they should be flying up on the screen! It had to be something OS-specific. So I employed a little Google-Fu to determine the cause of the problem.

My research uncovered a fair amount about how the shortcut key system operates and how Windows 8 and 10 Metro apps behave. This question and answer on SuperUser gives a concise synopsis of the root problem, which is that some Windows processes don't behave well, especially the newer "Metro" apps. When these apps are closed they remain memory-resident and eventually become "tombstoned", which apparently means they take up memory but don't respond to many Windows messages. I assume they respond to a request to be restarted, though.

That SuperUser entry led me to this excellent article from Raymond Chen of Microsoft that explains how Windows locates the program that should receive the shortcut key. It first cycles through all existing processes, asking each whether it is the owner of the shortcut key. The problem is that some processes do not respond to the Windows message in a timely fashion, causing the delay. How inconsiderate.

Having learned the cause of the problem, the issue became a detective case: how to track down the inconsiderate processes. I started with the suggestions in the SuperUser post, with some success, but the problem kept reappearing. I don't have time to pore over my Task Manager processes a couple of times a day to track down every ill-behaved program. I needed some way to identify the offending processes quickly.

Being a fan of the "quick-and-dirty" utility I fired up Visual Studio (using a shortcut key) and put together a quick console application that will enumerate the top-level Windows processes and query each for the hot key. The code is shown below.
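The original listing was lost to the blog template, so what follows is a reconstruction in the same spirit: EnumWindows with an inline callback delegate, a task per window, and SendMessageTimeout with WM_GETHOTKEY, as described below. Names and formatting are mine.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Runtime.InteropServices;
    using System.Text;
    using System.Threading.Tasks;

    internal static class Program
    {
        private const uint WM_GETHOTKEY = 0x0033;
        private const uint SMTO_ABORTIFHUNG = 0x0002;
        private const uint TimeoutMs = 3000;   // mirrors the apparent OS timeout

        private delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

        [DllImport("user32.dll")]
        private static extern bool EnumWindows(EnumWindowsProc callback, IntPtr lParam);

        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        private static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxCount);

        [DllImport("user32.dll", SetLastError = true)]
        private static extern IntPtr SendMessageTimeout(IntPtr hWnd, uint msg,
            IntPtr wParam, IntPtr lParam, uint flags, uint timeoutMs, out IntPtr result);

        private static void Main()
        {
            var queries = new List<Task<string>>();

            // The callback runs once per top-level window; each window gets
            // its own task so one hung window doesn't stall the whole sweep.
            EnumWindows((hWnd, lParam) =>
            {
                queries.Add(Task.Run(() => QueryHotKey(hWnd)));
                return true;   // true = keep enumerating
            }, IntPtr.Zero);

            Task.WaitAll(queries.ToArray());
            foreach (var query in queries)
                Console.WriteLine(query.Result);
        }

        private static string QueryHotKey(IntPtr hWnd)
        {
            var buffer = new StringBuilder(256);
            string name = GetWindowText(hWnd, buffer, buffer.Capacity) > 0
                ? buffer.ToString()
                : "<name unavailable>";

            var stopwatch = Stopwatch.StartNew();
            IntPtr hotKey;
            IntPtr ok = SendMessageTimeout(hWnd, WM_GETHOTKEY, IntPtr.Zero, IntPtr.Zero,
                SMTO_ABORTIFHUNG, TimeoutMs, out hotKey);
            stopwatch.Stop();

            return ok == IntPtr.Zero
                ? string.Format("TIMEOUT ({0} ms): {1}", stopwatch.ElapsedMilliseconds, name)
                : string.Format("{0,6} ms  hotkey=0x{1:X2}  {2}",
                    stopwatch.ElapsedMilliseconds, (long)hotKey, name);
        }
    }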


This is a pretty brute-force approach to the problem but appears to work. I coded it to query the windows in a multi-threaded fashion, but in practice that probably wasn't necessary. The response time to the SendMessageTimeout API call is a small fraction of a second, frequently sub-millisecond, so the multi-threading probably adds unnecessary overhead. I'm not too keen to refactor it because it works fast enough for me at the moment.

The process calls the EnumWindows API method to loop through all the top-level windows that are currently running. Some of these are low-level processes that are not important for our purposes, but I didn't want to exclude anything that might be relevant. EnumWindows takes as a parameter a callback method that is executed once for each window located; this is handled with a delegate, which I supply as an inline method. Within that callback I start a task that discovers the window's associated name and hot key.

The QueryHotKey method first queries the window for its name. The name may not be retrievable for a variety of reasons, most likely security restrictions. If it's not available I just list it as such rather than go to great lengths to determine the process information. In most cases the cause of the shortcut key delay will be something more mundane, for which the name is available.

The heart of the utility is the call to the SendMessageTimeout API method. This sends a Windows message to the window to query its shortcut key, with the stipulated timeout in milliseconds. My research led me to believe that Windows 8/10 waits three seconds before moving on, so I used the same value.

Once the windows list has been exhausted I wait for the threads to complete, then output the results to the console. In practice I ran this from a command line and used good old command line output redirection to spool the results to a file, which I can view in Notepad or any other text editor to find the culprit(s).

Do I Really Need Calc To Stay Resident?

So I ran my code, which worked pretty much correctly on the first pass, a situation that is definitely unusual for me. A few minor additions to the output gave me exactly what I needed.

What it uncovered was pretty interesting. The two processes that were causing the problem for me were newer Windows apps, namely the Windows Calculator and the Windows Settings app. I had opened both of them earlier and closed them, but they remained resident in a hibernated state that apparently does not respond to some messages. I will admit that I'm no expert on the structure of these newer apps and their "tombstoned" state. I'll leave that commentary to those more knowledgeable than me.

Once I killed the tasks I tried a shortcut key and sure enough my text editor sprang immediately to life. Hooray! However, the problem reoccurs when I start one of the apps that remains a background task. If you check out Andy Geisler's reply to the SuperUser question he lists some tips for prohibiting this behavior, and this page has a step-by-step tutorial on how to disable various Windows background apps. For me, disabling Store and Settings appears to have been the most effective.

Thursday, April 9, 2015

Sending Mail Via SMTP Over Implicit SSL in .Net

My Kingdom for Port 25

I recently ran across a situation where I wanted my home workstation to send emails on a periodic basis. No problem, I'm a developer, so I'll just whip up a quick .Net console application and set it to run as a scheduled task when I want to transmit the necessary information.

All was well and good until I checked my log files and found quite a few SMTP send errors. How strange. I use log4net for logging in most of my work so I tailed the log file using the excellent LogView Plus utility. From that I was able to determine that the send utility only failed when my workstation was connected to my client's VPN. And so the plot thickened.

Like many of you, I have an ISP (a major carrier in the northeastern United States) that blocks traffic on port 25 as an anti-spam measure. The only SMTP server that can be reached is the ISP's own, and only if you're connected to their network. However, when connected to my client's VPN, my SMTP traffic was being routed over their network, and my ISP's mail server rejected the connection because it looked like an unauthorized relay attempt.

My domain has a mail server that I can use but I can't reach it via the normal port 25 access because of the aforementioned ISP blocking. However the server can be reached via port 465 if transmitting over SSL. This port is no longer specifically for SMTPS but it works with my mail server so if I adjust a few settings in my code I should be fine. How typically naive of me.

Explicit Implications

So I modified my code that connects to the SMTP server to look like this:
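The original snippet didn't survive, but it was the stock SmtpClient pattern, roughly this (host name and credentials are placeholders):

    using System.Net;
    using System.Net.Mail;

    // ...inside the send routine...
    var client = new SmtpClient("mail.mydomain.com", 465)
    {
        EnableSsl = true,
        Credentials = new NetworkCredential("username", "password")
    };

    using (var message = new MailMessage("me@mydomain.com", "you@example.com",
        "Scheduled report", "Report body goes here"))
    {
        client.Send(message);   // long pause, then the exception described below
    }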

When execution got to the Send method there was a long wait, then an exception. I wasn't connected to the VPN but clearly something had timed out. The error indicated a failure sending mail (duh!) with the additional inner exception "Unable to read data from the transport connection: net_io_connectionclosed" (huh?!?).

After much head scratching I was able to determine that the mail server, when monitoring port 465, expects all traffic over that port to be encrypted using SSL. That means the certificate negotiation happens even before the first SMTP HELO command. Further research indicated that the SmtpClient class does not support this type of transport-level security, called Implicit SSL. So what's the point of the SmtpClient EnableSsl property? It turns out that it uses a different certificate negotiation procedure. The client connects via an insecure port, specifically the dreaded port 25, and issues a STARTTLS command. The client and server then negotiate the secure transmission after the explicit request, hence the name Explicit SSL. But this didn't help my situation, because port 25 is being blocked by my ISP. Grrrrrr! Even if I tried to use Explicit SSL it wouldn't work, because I can't reach my mail server over that port.

Tunnel My Way To Freedom

So I needed some way to send via SMTP over a secure connection that was negotiated before the initial SMTP handshake. I checked various Open Source libraries but they all support only Explicit SSL. I really didn't want to invest a lot of time rolling my own SMTPS implementation (although it would be an interesting project), especially since this was supposed to be a quickie utility.

What I finally located was a utility called Stunnel. It's essentially a secure transport connection between two endpoints. You can use it on a client to redirect traffic from a local port to another server/port combination over an encrypted channel.

DISCLAIMER: Stunnel uses portions of the OpenSSL library, which recently had a high-profile exploit published in all major tech news media. I believe the latest version uses the patched OpenSSL but please use at your own risk.

Once the utility is installed, you use the "stunnel Service Install" entry on the Start Menu to set it up as a service. Before starting the service you need to make a modification to the "stunnel.conf" configuration file. The entry for my particular situation looked like this:

; ************** Example SSL client mode services

[my-smtps]
client = yes
accept = 127.0.0.1:465
connect = mymailserver.com:465

This tells stunnel to accept traffic locally on port 465 and reroute it over a secure channel to my public mail server. A slight change to my code:
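(A representative fragment; the client now targets the local stunnel endpoint, and EnableSsl is off because the tunnel supplies the transport security.)

    var client = new SmtpClient("127.0.0.1", 465)   // stunnel listens here
    {
        EnableSsl = false,   // the tunnel encrypts; the SMTP session itself stays plain
        Credentials = new NetworkCredential("username", "password")
    };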


...was all I needed to make this work.

The upside of all this was I learned a good deal about SMTP. The downside is that this "quickie" utility took a lot longer than I expected, proving once again that there is no such thing as a "quickie" utility.

Monday, October 13, 2014

Processing SQL Server FILESTREAM Data, Part 4 - Readin' and Writin'


In the prior installments in this series I covered some background, FILESTREAM setup, and the file and table creation for this project. In this final installment we'll finally see some C# code that I used to read and write the FILESTREAM data.

The Three "R"s

I was always confused by the irony that only one of the legendary Three "R"s actually starts with an "R". Yet another indictment of American education? But I digress.
Before we work on the code to read FILESTREAM data, let's write some. First, we'll need a couple of structures to store information returned from various database operations.
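The originals didn't survive the blog template, but they were simple carriers along these lines (names are illustrative):

    // Carries the FILESTREAM path and transaction context returned by SQL Server.
    public class FileStreamInfo
    {
        public string PathName { get; set; }             // from FileData.PathName()
        public byte[] TransactionContext { get; set; }   // from GET_FILESTREAM_TRANSACTION_CONTEXT()
    }

    // Carries the identity value returned by an INSERT's OUTPUT clause.
    public class InsertResult
    {
        public int Id { get; set; }
    }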

Then we can create a routine that mimics an SMTP send, but instead stores the email information in the database tables we created in "Processing SQL Server FILESTREAM Data, Part 3 - Creating Tables". Pardon the formatting; it keeps the overlong lines within the blog template.
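Since the original listing is gone, here's a condensed reconstruction, assuming the tables from Part 3 and the carrier classes above; the table and column names follow Part 3, while the class and method names are mine:

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.IO;
    using System.Linq;
    using Dapper;

    public class EmailQueueWriter
    {
        private readonly string _connectionString;

        public EmailQueueWriter(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void QueueEmail(string fromAddress, string toAddress, string subject,
                               string body, IEnumerable<string> attachmentPaths)
        {
            using (var conn = new SqlConnection(_connectionString))
            {
                conn.Open();
                using (var tran = conn.BeginTransaction())
                {
                    // OUTPUT returns the new key without a second query.
                    int emailId = conn.Query<InsertResult>(
                        @"INSERT INTO dbo.EmailMessages
                              (FromAddress, ToAddress, Subject, Body, TransmitStatusId)
                          OUTPUT INSERTED.EmailMessageId AS Id
                          VALUES (@From, @To, @Subject, @Body, @Status)",
                        new { From = fromAddress, To = toAddress, Subject = subject,
                              Body = body, Status = 1 /* Queued */ }, tran)
                        .Single().Id;

                    int seq = 0;
                    foreach (string path in attachmentPaths)
                    {
                        // The 0x00 seed byte makes SQL Server create the backing
                        // file; NULL or DEFAULT would leave PathName() returning NULL.
                        int attachmentId = conn.Query<InsertResult>(
                            @"INSERT INTO dbo.EmailAttachments
                                  (EmailMessageId, SequenceNum, Filename, FileData)
                              OUTPUT INSERTED.EmailAttachmentId AS Id
                              VALUES (@EmailId, @Seq, @Name, 0x00)",
                            new { EmailId = emailId, Seq = ++seq,
                                  Name = Path.GetFileName(path) }, tran)
                            .Single().Id;

                        var info = conn.Query<FileStreamInfo>(
                            @"SELECT FileData.PathName() AS PathName,
                                     GET_FILESTREAM_TRANSACTION_CONTEXT() AS TransactionContext
                              FROM dbo.EmailAttachments
                              WHERE EmailAttachmentId = @Id",
                            new { Id = attachmentId }, tran).Single();

                        using (var source = File.OpenRead(path))
                        using (var target = new SqlFileStream(info.PathName,
                                   info.TransactionContext, FileAccess.Write))
                        {
                            source.CopyTo(target);   // streams in chunks, not one big read
                        }
                    }

                    tran.Commit();
                }
            }
        }
    }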

A couple of notes about the code shown above:
  • The code uses Marc Gravell and Sam Saffron's superb Micro-ORM Dapper which I highly recommend. While religious wars rage over the use of Micro-ORMs vs heavy ORMs I far prefer Dapper to other approaches;
  • The INSERT statements use the SQL Server OUTPUT clause to return ID information about the inserted rows, which is a more efficient method than sending a subsequent SELECT query for the information;
  • Once the streams have been opened, the .Net 4.0 CopyTo method will do a nice job of copying the bytes. If you're on an earlier version of the framework this method can easily be created. See Jon Skeet's sample implementation here.

Once the email message has been inserted into the master table and we have its ID we can then attempt to insert the attachments into their corresponding detail table. This is done in two steps:
  1. Insert the metadata about the attachment to the EmailAttachments table. Once this is complete you can retrieve a file name and context ID for streaming attachment data to the FILESTREAM;
  2. Open the FILESTREAM using provided framework methods for doing so. Write the attachment data to the FILESTREAM;

Seems simple, but there is a subtlety. The INSERT statement to add the metadata must add at least one byte of data to the file using Transact-SQL. That is indicated by the null byte ("0x00") that is the last value of the statement. If you don't supply this, instead supplying NULL or, as I initially attempted, default, SQL Server will not create a file since you haven't given it any data. Consequently the SQL Server PathName() function will return NULL and the call to open the SqlFileStream will fail unceremoniously.
There are two ways I could have submitted the attachment data to SQL Server: as the last value of the INSERT statement to the EmailAttachments table, or via streaming as in the example. I chose the latter so that, in the case of very large attachments, I could stream the file in chunks rather than reading the entire file into memory to submit via the INSERT statement. This is less resource-intensive under the heavy load I expect for this utility.
I then created a separate Windows service to read the messages, attempt the SMTP send, log successes and failures, and queue failures for a limited number of retries. The heart of the portion that reads the attachments looks quite similar to the write operation.
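Reconstructed in the same condensed style; it assumes an open SqlConnection named conn, an attachmentId pulled from the work queue, and the FileStreamInfo class from earlier:

    using (var tran = conn.BeginTransaction())
    {
        // GET_FILESTREAM_TRANSACTION_CONTEXT() returns NULL outside a
        // transaction, so the SELECT must run inside one.
        var info = conn.Query<FileStreamInfo>(
            @"SELECT FileData.PathName() AS PathName,
                     GET_FILESTREAM_TRANSACTION_CONTEXT() AS TransactionContext
              FROM dbo.EmailAttachments
              WHERE EmailAttachmentId = @Id",
            new { Id = attachmentId }, tran).Single();

        var buffer = new MemoryStream();
        using (var source = new SqlFileStream(info.PathName,
                   info.TransactionContext, FileAccess.Read))
        {
            source.CopyTo(buffer);   // same CopyTo trick, in the other direction
        }
        buffer.Position = 0;

        // buffer can now feed a System.Net.Mail.Attachment on the outgoing message
        tran.Commit();
    }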

Some notes about the code shown above:
  • I created a result class, shown earlier in this post, for retaining the file path and transaction context returned from the query;
  • Note that you must create a transaction for the SELECT in order for the GET_FILESTREAM_TRANSACTION_CONTEXT method to return a context that can be used in the SqlFileStream constructor;
  • Once again I have used the CopyTo method to move the bytes between the streams.

Summary

That finishes the heart of the SQL Server FILESTREAM operations for the utility I was constructing. The real trick of it was the initial configuration and understanding the process. Hopefully this series of articles will help someone past the problems I encountered. Good luck and good coding!

Wednesday, September 24, 2014

Processing SQL Server FILESTREAM Data, Part 3 - Creating Tables

In Parts 1 and 2 of this series I discussed my experience with the SQL Server FILESTREAM technology, specifically the background of the decision and setup of the SQL Server. In this installment I discuss the tables created and how I specified the FILESTREAM BLOB column.

Setting The Table

So after some struggle I had SQL Server ready to handle FILESTREAMS. What I needed now were the requisite tables to store the data. This is achieved by adding a column to a table and indicating that BLOB data will live there in a file that is stored on a FILESTREAM filegroup. Here are the tables I used for my email and attachments log:
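The original listing is gone; this reconstruction matches the column list below for EmailAttachments, while the EmailMessages columns and the filegroup name are representative:

    CREATE TABLE dbo.EmailMessages
    (
        EmailMessageId   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        FromAddress      nvarchar(256) NOT NULL,
        ToAddress        nvarchar(max) NOT NULL,
        Subject          nvarchar(500) NULL,
        Body             nvarchar(max) NULL,
        TransmitStatusId int NOT NULL,   -- FK to the simple status lookup table
        timestamp                        -- rowversion column for concurrency
    );

    CREATE TABLE dbo.EmailAttachments
    (
        EmailAttachmentId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        EmailMessageId    int NOT NULL
            REFERENCES dbo.EmailMessages (EmailMessageId),
        AttachmentFileId  uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
        SequenceNum       int NOT NULL,
        Filename          nvarchar(260) NOT NULL,
        FileData          varbinary(max) FILESTREAM NULL,
        timestamp
    )
    FILESTREAM_ON FilestreamExampleGroup;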

Most of the columns in the EmailMessages table are fairly self-explanatory. The TransmitStatusId column is a reference into a simple lookup table with an integer ID and description that indicates what state the message is in, e.g. Queued, Transmitted, Failed, etc. As you can see in the EmailAttachments table there are two columns that are somewhat out of the ordinary, the AttachmentFileId and FileData columns. But I'll explain each column so you can understand my approach to this design.
  • EmailAttachmentId - Monotonically increasing surrogate value to be used as a primary key. I prefer these to a GUID when a natural key is not handy but if you want to have a religious war about it there are plenty of places where the battle rages. Feel free to take it there;
  • EmailMessageId - Parent key to the EmailMessages table;
  • AttachmentFileId - This is a unique GUID identifier for the row, as signified by the ROWGUIDCOL indicator, necessary for the FILESTREAM feature to uniquely identify the data;
  • SequenceNum - Indicates the listing sequence of the attachment, for later reporting purposes;
  • Filename - Saves the original file name, since FILESTREAM will create generated file names, and I will want to recreate the file names later when I'm actually transmitting the file via SMTP;
  • FileData - The binary column where the file data is stored, although the data is read and written on the operating system file storage not the SQL Server data file.
  • timestamp - Yes, I still use timestamp columns for concurrency. I'm an old-school kind of guy.
The last part of the CREATE TABLE statement for the EmailAttachments table is where you specify the filegroup on which the FILESTREAM data will be stored. This references the filegroup we created in Processing SQL Server FILESTREAM Data, Part 2 - The Setup. And with that, we're finally ready to start coding!
Next up - Processing SQL Server FILESTREAM Data, Part 4 - Readin' and Writin'

Monday, September 22, 2014

Processing SQL Server FILESTREAM Data, Part 2 - The Setup

In Part 1 of this topic I discussed the reasoning behind the decision to use Microsoft's FILESTREAM technology for a recent client project. In this installment I discuss the setup portion of this on the SQL Server side. I'll spare you much of the swing-and-a-miss frustration while attempting to understand how the parts work, but I'll try to pinpoint the traps that I located the hard way.

Stream of Consciousness

The first step is to ensure that SQL Server's FILESTREAM technology is enabled for the instance in which you're working. This isn't too difficult to configure, but there is a portion of it that might be confusing.
In SQL Server Configuration Manager you will be presented with a list of SQL Server services that have been installed. Double click the SQL Server (MSSQLSERVER) service to see its configuration. The third tab in that dialog is the FILESTREAM configuration (see Image 1). The selections on this page require some explanation:
  1. The "Enable FILESTREAM for Transact-SQL Access" seems pretty simple. This option is necessary for any FILESTREAM access. But what's subtle here is what it omits, which is the next portion;
  2. The "Enable FILESTREAM for file I/O streaming access" is the portion that will allow you as a developer to read and write FILESTREAM data as if it were any other .Net Stream. I recommend enabling this since it allows some nifty capabilities that will be seen in the code for a subsequent post;
  3. The "Windows share name" was another option that seemed obvious but was more subtle. This essentially creates a pseudo-share, like any other network share, that contains files that can be read and written. But it won't show up in Windows Explorer. It's only accessible via the SqlFileStream .Net Framework class;
  4. The final option, "Allow remote clients to have streaming access to FILESTREAM data" is still a bit of a mystery to me. Why would you enable the access without allowing remote clients to stream to it? Is it likely that only local clients would use it? It doesn't seem so to me but perhaps I'm mistaken.

Image 1 - FILESTREAM Configuration

Instance Kharma

Next we need to ensure that our database instance is enabled to utilize FILESTREAM capabilities. This can be done from SQL Server Management Studio. Right-click on the database instance and choose Properties from the resulting menu. The Advanced configuration selection in that dialog has a dropdown list for FILESTREAM support right at the very top (see Image 2). I'm not certain whether this step is strictly necessary, since I didn't do things in the prescribed order, but it seemed to be needed. I chose the "Full access enabled" option in order to employ the remote streaming access that will be shown in a subsequent post.

Image 2 - FILESTREAM Instance Configuration

Filegroup Therapy

Since FILESTREAM BLOB data is stored on the file system it can't live inside the PRIMARY filegroup for a database. So we need to create a new filegroup and file to contain this data. This is done pretty simply with a few SQL statements, or so it would seem.
First the filegroup.
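A representative version of the statement (the database and filegroup names here are illustrative):

    ALTER DATABASE FilestreamExample
    ADD FILEGROUP FilestreamExampleGroup CONTAINS FILESTREAM;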

This is very simple and straightforward. It creates a logical filegroup that specifies that the files contained within will be where FILESTREAM BLOB data is stored.

Pernicious Permissions

Now that I had a filegroup I needed to add files to it. This is where things went a little sideways.
The SQL code to add a file to a filegroup is not terribly complicated.
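Something along these lines; the file path matches the error message below, while the logical name and filegroup name are illustrative:

    ALTER DATABASE FilestreamExample
    ADD FILE
    (
        NAME = FilestreamExampleFiles,
        FILENAME = 'E:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\FilestreamExampleFiles'
    )
    TO FILEGROUP FilestreamExampleGroup;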

Upon execution of this piece of code I was presented with the following noxious error:
Operating system error 0x80070005(Access is denied.) occurred while creating or opening file 'E:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\FilestreamExampleFiles'. Diagnose and correct the operating system error, and retry the operation.
As I investigated this issue I began to understand what was happening. SQL Server was attempting to create a folder on disk with the name I specified in the ALTER DATABASE command, which is where it would store the files that would comprise the BLOB data. But there was clearly a permissions issue creating the folder.
Well, I'm a developer, not an IT technician, but I know enough to tackle this kind of issue. Yet I was unable to resolve it in a satisfactory way. The SQL Server service was running under the NetworkService account, which seemed appropriate for the situation. That account had full control of the entire SQL Server folder tree and everything beneath it. But no matter what I did the problem persisted. I finally changed the service account to LocalSystem and the problem disappeared, but I'm uncomfortable with that answer. If I set the permissions for the NetworkService user, why was it unable to write to a local disk resource?
Up Next - Processing SQL Server FILESTREAM Data, Part 3 - Creating Tables