tag:blogger.com,1999:blog-14578169016126357322024-03-13T13:37:34.379-07:00Bob McGowan's BlogThoughts on software development and other random musingsBob Mchttp://www.blogger.com/profile/10012849048492940436noreply@blogger.comBlogger10125tag:blogger.com,1999:blog-1457816901612635732.post-10729060106117154272017-10-01T12:34:00.000-07:002017-10-01T12:34:25.205-07:00Ready, Fire, Aim!<style>div.para { margin-bottom: 7px; }</style>
<h1>Debugging Isn't Guesswork</h1>
<div class="para">I had a recent encounter with a team member that reignited a continuing source of aggravation for me when dealing with less assiduous developers. We were pair-programming, attempting to debug some VBA code that the other person had written. I was at the keyboard with my counterpart looking over my shoulder, offering advice.</div>
<div class="para">"You know what it might be, it might be <insert random code location here>."</div>
<div class="para">"Oh wait, I know what it is. It's <insert random code location here>."</div>
<div class="para">"Hmmm. Take a look at <insert random code location here>."</div>
<div class="para">Set aside the fact that it was incredibly frustrating to jump from spot to spot in code I was unfamiliar with while simultaneously trying to understand the overall logic. None of the suggestions panned out, which led me to growl (at a less than acceptable volume),</div>
<div class="para">"Let's stop guessing and do the damned analysis." (I'm not always known for my tact.)</div>
<div class="para">To me, that's what debugging is - analysis. But I see far too many developers, including some of the most experienced, who view debugging as total guesswork. It's the coding equivalent of throwing dung against a wall to see what sticks. And here's the real danger with that approach. Sometimes, your random guess <i>appears</i> to have corrected the problem, but actually hasn't. Because you didn't analyze and determine the exact cause, you may have just masked the original defect with another defect.</div>
<div class="para">Please stop debugging via guesswork. Please.</div>
<h2>Aim, Then Fire</h2>
<div class="para">My recommendation is to Identify-Locate-Kill:</div>
<div class="para"><ol>
<li>Identify and focus on what you're trying to achieve. This sounds straightforward, but we often lose sight of the goal. While searching for an off-by-one defect, have you ever found several other unrelated issues and then gotten sidetracked correcting them? Stay focused on the original intent. Make note of the other issues and come back to them later. Sometimes an informed guess by someone who knows the code can lead you to the general area, where you can begin your analysis, but don't just blindly assume that this is definitively the location of the defect.</li>
<li>Use analysis to locate the <i>precise location</i> of the defect. This is the crucial step. Don't skip past performing a step-by-step trace of the code to watch the values of variables and check the order of code execution. Modern debugging tools make this far simpler than when I first started coding, way back in the mists of time. <a href="https://en.wikipedia.org/wiki/Heisenbug" target="_blank">However, sometimes the act of attempting to observe an issue makes the issue difficult to duplicate</a>. In these situations don't forget the power of simple logging statements in code - sometimes it's the only way to locate a gnarly defect.</li>
<li>Kill the bug with a well-considered correction. This can take several forms. Sometimes it's a deletion of code rather than an addition. Sometimes it's a re-ordering of a few lines. And in other situations you need a complete code refactor. Discuss the approach with other developers you trust, preferably ones who know the codebase, to ensure that your correction doesn't have repercussions. Test your modifications by hand to ensure minimum viable correct operation.</li>
</ol></div>
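<div class="para">To make the "locate" step concrete, here's a trivial, entirely hypothetical illustration of the kind of temporary trace logging I mean. The method, names, and values are all invented; the point is that the trace lines bracket the suspect region so the log shows exactly where a value goes wrong:</div>
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
using System;
namespace RSS.DebugExample
{
// Hypothetical code under investigation. The TRACE lines bracket the
// suspect region so the log pinpoints which statement produced a bad value.
public class OrderProcessor
{
public static decimal ApplyDiscount( decimal total, int itemCount )
{
Console.Error.WriteLine( String.Format(
"TRACE ApplyDiscount enter: total={0}, itemCount={1}", total, itemCount ) );
decimal discount = ( itemCount >= 10 ) ? 0.10m : 0m;
Console.Error.WriteLine( String.Format(
"TRACE ApplyDiscount: discount={0}", discount ) );
return total * ( 1m - discount );
}
}
}
]]></script></pre>
<div class="para">Once the log confirms which statement is wrong, delete the trace lines and make the correction. Crude, but it works even when a debugger changes the timing enough to hide the defect.</div>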
<div class="para">Not indicated here is the need for overall application testing to ensure no regressions have been introduced. However, I don't consider that part of debugging - unit tests should be a regular, repeating step in your development cycle.</div>
<h2>I <i>Can</i> Get Some...Satisfaction</h2>
<div class="para">A lot of developers groan at the thought of debugging, especially correcting someone else's code. While I also prefer the pleasure of greenfield development, debugging is unavoidable. And when I know I've located and positively corrected a code defect, I experience a profound sense of satisfaction. Part of that is because debugging is <i>hard</i>. It can be considerably more difficult than coding a new application or feature, especially if you're trying to decipher some other developer's thought process - or lack thereof.</div>
<div class="para">So please, analyze, don't guess.</div>
<h1>The Sluggish Three Finger Salute Returns!</h1>
Posted 2016-01-06<br />
<h3>
Not So Super(fetch)
</h3>
In my <a href="http://blog.ramsoftsolutions.com/2015/12/find-slow-windows-shortcuts.html">previous post</a> I discussed my aggravation with a modification Microsoft has made in Windows 8 and 10 that causes some Metro apps to become non-responsive to certain Windows messages, specifically the <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms646278(v=vs.85).aspx">WM_GETHOTKEY</a> message that queries apps for their operating system shortcut key. My post provided some code that would help identify which applications were non-responsive so that you could kill their process and set their configuration to prohibit background loading. Initially this worked well for me — I identified the processes, shut them down, and my hotkeys worked properly again. All was well.<br />
<br />
Except it didn't last. I leave my workstation powered up for evening backups, and when I hit my trusty Ctrl-Alt-U combination to start <a href="http://www.ultraedit.com/">UltraEdit</a> the next morning, the shortcut key was back to a three-second delay! Grrrrrrr! I fired up my utility and was surprised to find that Calculator, which I knew I had killed the evening before, was back again. Huh? How did that happen?<br />
<br />
So once again I put on my <a href="https://en.wikipedia.org/wiki/List_of_The_Hardy_Boys_characters#Fenton_Hardy">Fenton Hardy</a> hat (yes, I'm showing my age) and tried to find out how this was happening. What I determined was that the <a href="http://www.osnews.com/story/21471/SuperFetch_How_it_Works_Myths">Windows Superfetch</a> service, which is essentially a disk caching mechanism, was pre-loading certain applications that it feels are frequently used. It loads them into RAM and places them in a hibernated state. The applications take up RAM but I have a 16GB workstation so that's not an issue. The idea is that when I invoke the application it doesn't have to be loaded from disk again.<br />
<br />
In theory this is a good thing. Back in the bad old DOS days, and early versions of Windows, disk caching programs were a great way to speed up operations on frequently used programs. But in this day of SSD drives I'm not certain it's necessary. However, as I dove deeper it seemed that Superfetch is something that shouldn't be disabled without thought. But there doesn't seem to be a way to exclude certain apps from being preloaded. Each time I closed Calculator it would reappear after a period of time and my hotkeys got sluggish again. A pox on you Superfetch!<br />
<br />
So once again being enamored with a quick-and-dirty solution I decided to code a little system tray application that would periodically scan memory for the offending apps and, if they don't respond to the WM_GETHOTKEY message, kill them mercilessly. Granted, it's a clunky way to go about this but there seems to be no way to prohibit these applications from restarting. Microsoft, please make these hibernated apps respond to this message or let us exclude some applications from the Superfetch service. Please!<br />
<br />
So here's how this works. In Visual Studio create a new Windows Forms project. Then delete the default Form1 that is created by Visual Studio. This will cause a "not found" error in the following line of code in Program.cs:
<br />
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
Application.Run( new Form1() );
]]></script></pre>
You can safely delete this line of code. We won't be using it.<br />
<br />
We'll need a couple of new classes before we put new startup code in the Program.cs file. First, we need code to encapsulate the system tray support. Please note: I adapted this code from something I found on the Internet a while back but I didn't save the reference and I can't for the life of me remember where I found it. My apologies to the original author.<br />
<br />
First we'll need a class to encapsulate the popup context menu for the system tray icon, so we can exit the program.<br />
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
using System;
using System.Windows.Forms;
namespace RSS.KillMetroApps
{
class ContextMenus
{
public ContextMenuStrip Create()
{
// Add the default menu options.
ContextMenuStrip menu = new ContextMenuStrip();
ToolStripMenuItem item;
item = new ToolStripMenuItem();
item.Text = "Exit";
item.Click += new System.EventHandler( Exit_Click );
menu.Items.Add( item );
return menu;
}
void Exit_Click( object sender, EventArgs e )
{
Application.Exit();
}
}
}
]]></script></pre>
Then we need a class to handle the icon and tooltip display.<br />
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
using System;
using System.Diagnostics;
using System.Windows.Forms;
namespace RSS.KillMetroApps
{
class ProcessIcon : IDisposable
{
// The NotifyIcon is part of the System.Windows.Forms namespace.
// It specifies what to show in the system tray.
public NotifyIcon ni;
public ProcessIcon()
{
ni = new NotifyIcon();
}
public void Display()
{
// Put the icon in the system tray and allow it to react to mouse clicks.
ni.MouseClick += new MouseEventHandler( ni_MouseClick );
// Make a Resources resx file and put an icon in it that you can reference here
ni.Icon = Resources.KillProcess;
ni.Text = "Kill Memory-Resident Tombstoned Metro Apps";
ni.Visible = true;
// Attach a context menu.
ni.ContextMenuStrip = new ContextMenus().Create();
}
public void Dispose()
{
// When the application closes, this will remove the icon from the system tray immediately.
ni.Dispose();
}
void ni_MouseClick( object sender, MouseEventArgs e )
{
// Handle mouse button clicks.
if( e.Button == MouseButtons.Left ) {
// Start Windows Explorer.
Process.Start( "explorer" );
}
}
}
}
]]></script></pre>
In the Program.cs file, we initialize the system tray support then set a timer to periodically scan for offending applications. I created a custom configuration class to be able to add to the bad applications list, which is not shown in this post. (For the record, the apps that were unresponsive for me that kept being reloaded were Calculator, Settings, and Movies & TV). You can create your list however you see fit.<br />
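For illustration only, here is a minimal, self-contained stand-in with the same shape as my configuration class (my real class is a custom System.Configuration section; this sketch just parses an equivalent XML fragment, so the <b>ScanIntervalSecs</b>, <b>AppName</b>, and <b>TimeoutSecs</b> members the code below relies on are visible):<br />
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
using System.Collections.Generic;
using System.Xml.Linq;
namespace RSS.KillMetroApps
{
// Hypothetical stand-in for the configuration class not shown in this post.
// An equivalent fragment would look like:
// <killApps scanIntervalSecs="180">
//   <app name="Calculator" />
//   <app name="Settings" timeoutSecs="5" />
// </killApps>
public class KillApp
{
public string AppName { get; set; }
public int TimeoutSecs { get; set; }
}
public class KillAppConfiguration
{
public int ScanIntervalSecs { get; set; }
public List<KillApp> Apps { get; set; }
public static KillAppConfiguration Parse( string xml )
{
XElement root = XElement.Parse( xml );
KillAppConfiguration config = new KillAppConfiguration();
config.ScanIntervalSecs = (int)root.Attribute( "scanIntervalSecs" );
config.Apps = new List<KillApp>();
foreach( XElement e in root.Elements( "app" ) ) {
KillApp app = new KillApp();
app.AppName = (string)e.Attribute( "name" );
// Default to the same three-second timeout Windows itself uses
app.TimeoutSecs = (int?)e.Attribute( "timeoutSecs" ) ?? 3;
config.Apps.Add( app );
}
return config;
}
}
}
]]></script></pre>
However you build your list, all that matters is that the scan code can enumerate app names with a per-app timeout.<br />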
<br />
A lot of the Windows API specific code is taken from my previous post, including the task spawning. I guess I could have recoded it to be less complex but it was easier to just drop it in from the other application.
<br />
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using log4net;
using log4net.Config;
namespace RSS.KillMetroApps
{
static class Program
{
private static KillAppConfiguration config;
private static ProcessIcon pi;
private static ILog logger;
private static int killedProcesses = 0;
// Various Windows API declarations
[Flags]
enum SendMessageTimeoutFlags : uint
{
SMTO_NORMAL = 0x0,
SMTO_BLOCK = 0x1,
SMTO_ABORTIFHUNG = 0x2,
SMTO_NOTIMEOUTIFNOTHUNG = 0x8,
SMTO_ERRORONEXIT = 0x20
}
const uint WM_GETHOTKEY = 0x0033;
[DllImport( "user32.dll", CharSet = CharSet.Unicode )]
private static extern int GetWindowText( IntPtr hWnd, StringBuilder strText, int maxCount );
[DllImport( "user32.dll", CharSet = CharSet.Unicode )]
private static extern int GetWindowTextLength( IntPtr hWnd );
[DllImport( "user32.dll" )]
private static extern bool EnumWindows( EnumWindowsProc enumProc, IntPtr lParam );
[DllImport( "user32.dll", SetLastError = true, CharSet = CharSet.Auto )]
private static extern IntPtr SendMessageTimeout(
IntPtr hWnd,
uint Msg,
UIntPtr wParam,
IntPtr lParam,
SendMessageTimeoutFlags fuFlags,
uint uTimeout,
out UIntPtr lpdwResult );
// function signature for callback method of EnumWindows API function
public delegate bool EnumWindowsProc( IntPtr hWnd, IntPtr lParam );
[DllImport( "user32.dll", SetLastError = true )]
static extern uint GetWindowThreadProcessId( IntPtr hWnd, out uint processId );
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
try {
Console.WriteLine( "Starting KillMetroApps" );
XmlConfigurator.Configure();
logger = LogManager.GetLogger( "RSS.KillMetroApps" );
logger.Debug( "Loading configuration." );
config = (KillAppConfiguration)ConfigurationManager.GetSection( "killApps" );
logger.Debug( "Setting .Net Winforms environment." );
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault( false );
logger.Debug( "Initializing system tray icon." );
// Show the system tray icon.
using( pi = new ProcessIcon() ) {
logger.DebugFormat(
"Setting up timer for process scans, interval = {0:#,##0} seconds.",
config.ScanIntervalSecs );
using( Timer scanTimer = new Timer() ) {
scanTimer.Interval = config.ScanIntervalSecs * 1000;
scanTimer.Tick += scanTimer_Tick;
scanTimer.Start();
logger.Debug( "Displaying system tray icon." );
pi.Display();
logger.Debug( "Beginning wait mode." );
// Make sure the application runs!
Application.Run();
logger.Debug( "Application exit started." );
scanTimer.Stop();
}
}
logger.Debug( "Application exiting cleanly." );
}
catch( Exception ex ) {
logger.Error( "Unexpected error: " + ex.Message, ex );
}
}
private static void scanTimer_Tick( object sender, EventArgs e )
{
Timer me = (Timer)sender;
me.Stop();
ScanApps();
me.Start();
}
private static void ScanApps()
{
try {
logger.Debug( "Starting app detecting via Windows enumeration." );
StringBuilder hoverMessage = new StringBuilder();
List<Task> taskList = new List<Task>();
// Call the Windows API for a list of all top-level windows currently active
EnumWindows( delegate( IntPtr wnd, IntPtr param )
{
// For each window, spawn a thread that will query each one for their hot key
// Keep track of the threads in a list in order to wait for all to complete
taskList.Add( Task.Factory.StartNew( () => QueryHotkey( wnd ) ) );
return true; // return all windows
}, IntPtr.Zero );
// Wait for all the tasks to complete.
// Since they're all supposed to timeout in three seconds set the max wait time to 5 seconds
logger.InfoFormat( "{0} tasks spawned. Waiting for all tasks to complete.", taskList.Count );
Task.WaitAll( taskList.ToArray(), 5000 );
pi.ni.Text = String.Format( "Kill Memory-Resident Tombstoned Metro Apps ({0:#,##0})", killedProcesses );
logger.Info( "All tasks complete. Returning to our regularly scheduled program" );
}
catch( Exception ex ) {
logger.Error( "Unexpected error in ScanApps method - " + ex.Message, ex );
}
}
static void QueryHotkey( IntPtr hWnd )
{
try {
// First attempt to get the name from each window
// There are many instances where we will not receive a result,
// for a variety of reasons. In those cases just shrug and move on for now
StringBuilder windowNameBuilder = null;
int size = GetWindowTextLength( hWnd );
if( size > 0 ) {
windowNameBuilder = new StringBuilder( size + 1 );
GetWindowText( hWnd, windowNameBuilder, windowNameBuilder.Capacity );
logger.InfoFormat( "Window name found {0}", windowNameBuilder.ToString() );
}
if( windowNameBuilder != null ) {
string windowName = windowNameBuilder.ToString();
foreach( KillApp app in config.Apps ) {
if( windowName == app.AppName ) {
logger.InfoFormat(
"App found with window name {0}, checking response to WM_GETHOTKEY",
windowName );
// Query the window for its hot key, with a 3 second timeout
// Keeping a timer just in case to see if any are slow, even
// if within the 3 second limit.
UIntPtr result = UIntPtr.Zero;
IntPtr retVal = SendMessageTimeout(
hWnd,
WM_GETHOTKEY,
(UIntPtr)0,
(IntPtr)0,
SendMessageTimeoutFlags.SMTO_ABORTIFHUNG,
(uint)( app.TimeoutSecs * 1000 ),
out result );
// Log the resulting message, being thread safe with the shared resource
if( retVal.ToInt32() == 0 ) {
logger.InfoFormat(
"App {0} response to WM_GETHOTKEY timed out, attempting to kill process",
windowName );
uint processId;
if( GetWindowThreadProcessId( hWnd, out processId ) != 0 ) {
logger.InfoFormat(
"App {0} process ID = {1}, sending kill request",
windowName, processId );
Process process = Process.GetProcessById( (int)processId );
process.Kill();
killedProcesses++;
}
}
else {
logger.InfoFormat(
"App {0} replied within tolerance levels",
windowName );
}
}
}
}
}
catch( Exception ex ) {
logger.Error( "Error querying hotkey", ex );
}
}
}
}
]]></script></pre>
While this approach is less than elegant, the code appears to work nicely. I have the scan set for every three minutes, and it runs for only a few milliseconds per window, except for the offending apps, which take a full three seconds. So there is a potential time window where an offending app may be resident when I press a hotkey, but in practice it hasn't happened yet, so I appear to have alleviated 99.99% of the issue. Crude but effective.
<h1>The Sluggish Three Finger Salute</h1>
Posted 2015-12-31<br />
<h3>Not So Shortcuts
</h3>
Like many of you I recently upgraded both my portable and my primary workstation to Windows 10. Unlike some of you I did this willingly, having heard that this release of Windows was significantly improved from v8.1 (which I did not install) and because I do like to stay on top of the platform for my clients that will be using it.<br />
<br />
Shortly after installing the upgrade, and having gone through some painful changes and learning curves, I noticed that my Windows shortcut keys were responding very sluggishly. As a developer and keyboard enthusiast I have quite a number of shortcut keys defined at the OS level. For example, Ctrl-Alt-U will pop up my trusty copy of <a href="http://www.ultraedit.com">UltraEdit</a>, Ctrl-Alt-Q will fire up SQL Server Management Studio, Ctrl-Alt-X for Excel. You get the picture.<br />
<br />
I hadn't seen this sluggish shortcut behavior in Windows 7 and it was immensely frustrating. Almost all my applications are on SSD drives and I'm running a quad-core i7 CPU so they should be flying up on the screen! It had to be something OS-specific. So I employed a little Google-Fu to determine the cause of the problem.<br />
<br />
My research uncovered a fair amount about how the shortcut key system operates and how Windows 8 and 10 Metro apps behave. This question and answer on <a href="http://superuser.com/questions/426947/slow-windows-desktop-keyboard-shortcuts">SuperUser</a> gives a concise synopsis of the root problem, which is that some Windows processes don't behave well, especially newer "Metro" apps. When these apps are closed they remain memory-resident and eventually become "tombstoned", which apparently means they take up memory but do not respond to many Windows messages. I assume they respond to a request to be restarted though.<br />
<br />
That SuperUser entry led me to this excellent article from <a href="https://blogs.msdn.microsoft.com/oldnewthing/20120502-00/?p=7723/">Raymond Chen</a> of Microsoft describing how Windows locates the program that should receive the shortcut key. Windows first cycles through all existing processes, asking each whether it is the owner of the shortcut key. The problem is that some processes do not respond to the Windows message in a timely fashion, causing the delay. How inconsiderate.<br />
<br />
Having learned about the cause of the problem the issue became a detective case — how to track down the inconsiderate processes. I started with the suggestions in the SuperUser post with some success but the problem kept reappearing. I don't have time to pore through my Task Manager processes a couple of times a day to track down every ill-behaved program. I needed some way to identify the offending processes quickly.<br />
<br />
Being a fan of the "quick-and-dirty" utility I fired up Visual Studio (using a shortcut key) and put together a quick console application that will enumerate the top-level Windows processes and query each for the hot key. The code is shown below.<br />
<br />
<pre lang="cs; highlight: [61]" ><script class="brush: c-sharp" type="syntaxhighlighter">
<![CDATA[
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading.Tasks;
namespace RSS.FindSlowHotkey
{
class Program
{
// Output results buffer for bulk write to console,
// plus lock object for thread-safe add to buffer
static List<string> resultMessages = new List<string>();
static object lockObject = new object();
// Various Windows API declarations
[Flags]
enum SendMessageTimeoutFlags : uint
{
SMTO_NORMAL = 0x0,
SMTO_BLOCK = 0x1,
SMTO_ABORTIFHUNG = 0x2,
SMTO_NOTIMEOUTIFNOTHUNG = 0x8,
SMTO_ERRORONEXIT = 0x20
}
const uint WM_GETHOTKEY = 0x0033;
[DllImport( "user32.dll", CharSet = CharSet.Unicode )]
private static extern int GetWindowText( IntPtr hWnd,
StringBuilder strText,
int maxCount );
[DllImport( "user32.dll", CharSet = CharSet.Unicode )]
private static extern int GetWindowTextLength( IntPtr hWnd );
[DllImport( "user32.dll" )]
private static extern bool EnumWindows( EnumWindowsProc enumProc,
IntPtr lParam );
[DllImport( "user32.dll", SetLastError = true, CharSet = CharSet.Auto )]
private static extern IntPtr SendMessageTimeout(
IntPtr hWnd,
uint Msg,
UIntPtr wParam,
IntPtr lParam,
SendMessageTimeoutFlags fuFlags,
uint uTimeout,
out UIntPtr lpdwResult );
// function signature for callback method of EnumWindows API function
public delegate bool EnumWindowsProc( IntPtr hWnd, IntPtr lParam );
static void Main( string[] args )
{
List<Task> taskList = new List<Task>();
// Call the Windows API for a list of all
// top-level windows that are currently active
EnumWindows( delegate( IntPtr wnd, IntPtr param )
{
// For each window, spawn a thread that will query
// each one for their hot key. Keep track of the threads
// in a list in order to wait for all to complete
taskList.Add( Task.Factory.StartNew( () => QueryHotkey( wnd ) ) );
return true; // return all windows
}, IntPtr.Zero );
// Wait for all the tasks to complete.
// Since they're all supposed to timeout in
// three seconds set the max wait time to 5 seconds
Task.WaitAll( taskList.ToArray(), 5000 );
// Dump all the result messages to the console
foreach( string message in resultMessages ) {
Console.WriteLine( message );
}
}
static void QueryHotkey( IntPtr hWnd )
{
// First attempt to get the name from each window
// There are many instances where we will not
// receive a result, for a variety of reasons.
// In those cases just shrug and move on for now
StringBuilder message = new StringBuilder();
int size = GetWindowTextLength( hWnd );
if( size > 0 ) {
var builder = new StringBuilder( size + 1 );
GetWindowText( hWnd, builder, builder.Capacity );
message.AppendFormat( "Window [{0}], hotkey ", builder.ToString() );
}
else {
message.Append( "Unnamed window, hotkey " );
}
// Query the window for its hot key, with a 3 second timeout
// Keeping a timer just in case to see if any are slow, even
// if within the 3 second limit.
Stopwatch sw = new Stopwatch();
sw.Start();
UIntPtr result = UIntPtr.Zero;
IntPtr retVal = SendMessageTimeout(
hWnd,
WM_GETHOTKEY,
(UIntPtr)0,
(IntPtr)0,
SendMessageTimeoutFlags.SMTO_ABORTIFHUNG,
3000,
out result );
sw.Stop();
// Log the resulting message,
// being thread safe with the shared resource
if( retVal.ToInt32() == 0 ) {
message.Append( " timed out (jerk)" );
}
else {
message.AppendFormat(
@" replied {0:hh\:mm\:ss\.fffff}", sw.Elapsed );
}
lock( lockObject ) {
resultMessages.Add( message.ToString() );
}
}
}
}
]]></script></pre>
<br />
This is a pretty brute-force approach to the problem but it appears to work. I coded it to query the windows in a multi-threaded fashion, but in practice that probably wasn't necessary. The response time to the SendMessageTimeout API call is a very small fraction of a second, frequently sub-millisecond, so the multi-threading probably adds unnecessary overhead. I'm not too keen to refactor it because it works fast enough for me at the moment.<br />
<br />
The process calls the <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms633497(v=vs.85).aspx"><b>EnumWindows</b></a> API method to loop through all the top-level windows that are currently running. Some of these are low-level processes that are not important for our purposes, but I didn't want to exclude anything that might be relevant. The <b>EnumWindows</b> method takes as a parameter a callback method that is executed once for each window located; this is handled by declaring a delegate signature and passing an inline anonymous method. Within that callback I spawn a task that discovers the window's name and queries its hot key.<br />
<br />
The <b>QueryHotkey</b> method first queries the window for its name. The name may not be available for a variety of reasons; if it isn't, I just list the window as unnamed rather than go to great lengths to determine the process information. In most cases the cause of the shortcut key delay will be something more mundane for which the name is available.<br />
<br />
The heart of the utility is the call to the <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms644952(v=vs.85).aspx"><b>SendMessageTimeout</b></a> API method. This sends a Windows message asking the window for its shortcut key, with the stipulated timeout in milliseconds. My research led me to believe that Windows 8/10 waits three seconds before moving on, so I used the same value.<br />
<br />
Once the windows list has been exhausted I wait for the threads to complete, then output the results to the console. In practice I ran this from a command line and used good old <a href="https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/redirection.mspx?mfr=true">command line output redirection</a> to spool the results to a file, which I can view in Notepad or any other text editor to find the culprit(s).
<pre lang="powershell"><script class="brush: powershell" type="syntaxhighlighter"><![CDATA[
RSS.FindSlowHotkey.exe > results.txt
]]></script></pre>
<h3>Do I Really Need Calc To Stay Resident?</h3>
So I ran my code, which worked pretty much correctly on the first pass, a situation that is definitely unusual for me. After some minor additions to the output I had what I needed.<br />
<br />
What it uncovered was pretty interesting. The two processes causing the problem for me were newer Windows apps: the Windows Calculator and the Windows Settings app. I had opened and closed both of them earlier, but they remained resident in a hibernated state that apparently does not respond to some messages. I will admit that I'm no expert on the structure of these newer apps and their "tombstoned" state. I'll leave that commentary to those more knowledgeable than me.<br />
<br />
Once I killed the tasks I tried a shortcut key and sure enough my text editor sprang immediately to life. Hooray! However, the problem reoccurs when I start one of the apps that remains a background task. If you check out <a href="http://superuser.com/a/957210/20106">Andy Geisler's</a> reply to the SuperUser question he lists some tips for prohibiting this behavior, and this <a href="http://windows.wonderhowto.com/how-to/everything-you-need-disable-windows-10-0163552/">page</a> has a step-by-step tutorial on how to disable various Windows background apps. For me, disabling Store and Settings appears to have been the most effective.
<h1>Sending Mail Via SMTP Over Implicit SSL in .Net</h1>
Posted 2015-04-09<br />
<h3>My Kingdom for Port 25</h3>
I recently ran across a situation where I wanted my home workstation to send emails on a periodic basis. No problem, I'm a developer, so I'll just whip up a quick .Net console application and set it to run as a scheduled task when I want to transmit the necessary information.<br />
<br />
All was well and good until I checked my log files and found quite a few SMTP send errors. How strange. I use <a href="https://logging.apache.org/log4net/" target="_blank">log4net</a> for logging in most of my work so I <a href="http://en.wikipedia.org/wiki/Tail_%28Unix%29" target="_blank">tailed</a> the log file using the excellent <a href="http://www.logviewplus.com/" target="_blank">LogView Plus</a> utility. From that I was able to determine that the send utility only failed when my workstation was connected to my client's VPN. And so the plot thickened.<br />
<br />
Like many of yours, my ISP, a major carrier in the northeastern United States, blocks traffic on port 25 as an anti-spam measure. The only SMTP server that can be reached is the ISP's own, and only if you're connected to their network. However, when connected to my client's VPN, the SMTP traffic was being routed over their network, and my ISP's mail server would reject the connection since it looked like an unauthorized relay attempt.<br />
<br />
My domain has a mail server that I can use but I can't reach it via the normal port 25 access because of the aforementioned ISP blocking. However the server can be reached via port 465 if transmitting over SSL. This port is no longer specifically for SMTPS but it works with my mail server so if I adjust a few settings in my code I should be fine. How typically naive of me.<br />
<br />
<h3>Explicit Implications</h3>
So I modified my code that connects to the SMTP server to send to look like this:<br/>
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
using System;
using System.Net;
using System.Net.Mail;
namespace RSS.SmtpTest
{
class Program
{
static void Main( string[] args )
{
try {
using( SmtpClient smtpClient = new SmtpClient( "mymailserver.com", 465 ) ) {
NetworkCredential creds = new NetworkCredential( "username", "password" );
smtpClient.Credentials = creds;
MailMessage msg = new MailMessage( "joe@schmoe.com", "jane@schmoe.com", "Test", "This is a test" );
smtpClient.Send( msg );
}
}
catch( Exception ex ) {
Console.WriteLine( ex.Message );
}
}
}
}
]]></script></pre>
<br />
When execution got to the Send method there was a long wait, then an exception. I wasn't connected to the VPN but clearly something had timed out. The error indicated a failure sending mail (duh!) with the additional inner exception "Unable to read data from the transport connection: net_io_connectionclosed" (huh?!?).<br />
<br />
After much head scratching I was able to determine that the mail server, when monitoring port 465, is expecting <i>all</i> traffic over that port to be encrypted using SSL. That means the certificate negotiation happens even before the first SMTP HELO command. Further research indicated that the SmtpClient class does not support this type of transport-level security, called <b>Implicit SSL</b>. So what's the point of the <a href="https://msdn.microsoft.com/en-us/library/system.net.mail.smtpclient.enablessl%28v=vs.100%29.aspx" target="_blank">SmtpClient EnableSsl</a> property? It turns out that it uses a different certificate negotiation procedure. The client connects via an insecure port, specifically the dreaded port 25, and issues a STARTTLS command. The client and server will then negotiate the secure transmission after the explicit request, hence the name <b>Explicit SSL</b>. But this didn't help my situation because <i>port 25 is being blocked by my ISP. Grrrrrr!</i> Even if I tried to use Explicit SSL it wouldn't work because I can't reach my mail server over that port.<br />
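<br />
For contrast, here is a sketch of what the Explicit SSL route looks like in code, using the same placeholder server and credentials as my earlier listing, on port 587 (the usual STARTTLS submission port). It wouldn't have helped me, since my ISP's blocking is the whole problem, but it shows where EnableSsl actually fits:<br />
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
using System.Net;
using System.Net.Mail;
namespace RSS.SmtpTest
{
class ExplicitSslExample
{
// Same placeholder server name and credentials as the listing above.
public static SmtpClient BuildClient()
{
// Port 587 is the standard mail submission port for STARTTLS
SmtpClient smtpClient = new SmtpClient( "mymailserver.com", 587 );
smtpClient.EnableSsl = true; // connect in the clear, then upgrade via STARTTLS
smtpClient.Credentials = new NetworkCredential( "username", "password" );
return smtpClient;
}
public static void SendExample()
{
using( SmtpClient smtpClient = BuildClient() ) {
smtpClient.Send( new MailMessage(
"joe@schmoe.com", "jane@schmoe.com", "Test", "This is a test" ) );
}
}
}
}
]]></script></pre>
Note that EnableSsl only governs the STARTTLS upgrade; it does nothing for Implicit SSL on port 465, which is why I had to look elsewhere.<br />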
<br />
<h3>Tunnel My Way To Freedom</h3>
So I needed some way to send via SMTP over a secure connection that was negotiated <i>before</i> the initial SMTP handshake. I checked various Open Source libraries but they all support only Explicit SSL. I really didn't want to invest a lot of time rolling my own SMTPS implementation (although it would be an interesting project), especially since this was supposed to be a quickie utility.<br />
<br />
What I finally located was a utility called <a href="https://www.stunnel.org/index.html" target="_blank">Stunnel</a>. It essentially provides a secure transport connection between two endpoints. You can use it on a client to redirect traffic sent to a local port to another server/port combination over an encrypted channel.<br />
<br />
<blockquote style="background-color: #cccccc;">
DISCLAIMER: Stunnel uses portions of the OpenSSL library, which recently had a high-profile exploit published in all major tech news media. I believe the latest version uses the patched OpenSSL but please use at your own risk.
</blockquote>
<br />
Once the utility is installed, you use the "stunnel Service Install" entry on the Start Menu to set it up as a service. Before starting the service you need to make a modification to the "stunnel.conf" configuration file. The entry for my particular situation looked like this:<br />
<br />
<blockquote style="background-color: #cccccc;">
; ************** Example SSL client mode services<br />
<br />
[my-smtps]<br />
client = yes<br />
accept = 127.0.0.1:465<br />
connect = mymailserver.com:465<br />
</blockquote>
<br />
This tells stunnel to accept traffic locally on port 465 and reroute it over a secure channel to my public mail server. A slight change to my code:<br />
<br />
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
using System;
using System.Net;
using System.Net.Mail;
namespace RSS.SmtpTest
{
class Program
{
static void Main( string[] args )
{
try {
using( SmtpClient smtpClient = new SmtpClient( "localhost", 465 ) ) { // <-- Changed the server to local stunnel
NetworkCredential creds = new NetworkCredential( "username", "password" );
smtpClient.Credentials = creds;
MailMessage msg = new MailMessage( "joe@schmoe.com", "jane@schmoe.com", "Test", "This is a test" );
smtpClient.Send( msg );
}
}
catch( Exception ex ) {
Console.WriteLine( ex.Message );
}
}
}
}
]]></script></pre>
<br />
...was all I needed to make this work.<br />
<br />
The upside of all this was I learned a good deal about SMTP. The downside is that this "quickie" utility took a lot longer than I expected, proving once again that there is no such thing as a "quickie" utility.<br />
<br />Bob Mchttp://www.blogger.com/profile/10012849048492940436noreply@blogger.com6tag:blogger.com,1999:blog-1457816901612635732.post-91500952933546197472014-10-13T17:23:00.000-07:002014-10-13T18:23:36.474-07:00Processing SQL Server FILESTREAM Data, Part 4 - Readin' and Writin'<style>div.para { margin-bottom: 7px; }</style>
<a href="http://www.codeproject.com/script/Articles/BlogFeedList.aspx?amid=7544" rel="tag" style="display: none;">CodeProject</a>
<br />
<div class="para">
In the prior installments in this series I covered <a href="http://blog.ramsoftsolutions.com/2014/09/processing-sql-server-filestream-data.html">some background</a>, <a href="http://blog.ramsoftsolutions.com/2014/09/processing-sql-server-filestream-data_22.html">FILESTREAM setup</a>, and the <a href="http://blog.ramsoftsolutions.com/2014/09/processing-sql-server-filestream-data_22.html">file and table creation</a> for this project. In this final installment we'll finally see some C# code that I used to read and write the FILESTREAM data.</div>
<h1>
The Three "R"s</h1>
<div class="para">
I was always confused by the irony that only one of the legendary <a href="http://en.wikipedia.org/wiki/The_three_Rs">Three "R"s</a> actually starts with an "R". Yet another indictment of American education? But I digress.</div>
<div class="para">
Before we work on the code to read FILESTREAM data, let's write to it first. First, we'll need a couple of structures to store information returned from various database operations.</div>
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
public class InsertResult
{
public decimal Id { get; set; }
public byte[] timestamp { get; set; }
}
public class FilestreamInsertResult
{
public decimal Id { get; set; }
public byte[] timestamp { get; set; }
public string FilestreamPath { get; set; }
public byte[] FilestreamContext { get; set; }
}
public class FilestreamSelectResult
{
public string FilestreamPath { get; set; }
public byte[] FilestreamContext { get; set; }
}
]]></script>
</pre>
<div class="para">
Then we can create a routine that mimics an SMTP send, but instead stores the email information to the database tables we created in "Processing SQL Server FILESTREAM Data, Part 3 - Creating Tables". Pardon the formatting in order to make the overlong lines fit within the blog template.
</div>
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
public bool Send( string fromAddress, string fromAlias, string recipients,
string ccRecipients, string bccRecipients,
string subject, string body, string[] attachments )
{
try {
using( IDbConnection connection = new SqlConnection( connectionString ) ) {
connection.Open();
using( IDbTransaction trans = connection.BeginTransaction() ) {
try {
InsertResult insertId = connection.Query<InsertResult>(
@"INSERT INTO Notification.EmailMessages(
TransmitStatusId, SubmitDate, TransmitDate,
AttemptCount, FromAddress, FromAlias, ToAddresses,
CcAddresses, BccAddresses, Subject, Body )
OUTPUT Inserted.EmailMessageId AS Id, Inserted.timestamp
VALUES( @TransmitStatusId, @SubmitDate, @TransmitDate,
@AttemptCount, @FromAddress, @FromAlias,
@ToAddresses, @CcAddresses, @BccAddresses,
@Subject, @Body )",
new
{
TransmitStatusId = Model.TransmitStatus.Queued,
SubmitDate = DateTime.Now,
TransmitDate = (System.Nullable<DateTime>)null,
AttemptCount = 0,
FromAddress = fromAddress,
FromAlias = fromAlias,
ToAddresses = recipients,
CcAddresses = ccRecipients,
BccAddresses = bccRecipients,
Subject = subject,
Body = body
}, trans,
commandType: CommandType.Text ).FirstOrDefault();
if( attachments != null && attachments.Length > 0 ) {
for( int attachmentIdx = 0; attachmentIdx < attachments.Length; attachmentIdx++ ) {
FilestreamInsertResult filestreamId =
connection.Query<FilestreamInsertResult>(
@"INSERT INTO Notification.EmailAttachments( EmailMessageId,
AttachmentFileId, SequenceNum, Filename, FileData )
OUTPUT Inserted.EmailAttachmentId AS Id,
Inserted.timestamp,
Inserted.FileData.PathName() AS FilestreamPath,
GET_FILESTREAM_TRANSACTION_CONTEXT() AS FilestreamContext
VALUES( @EmailMessageId, NEWID(), @SequenceNum, @Filename, 0x00 )",
new
{
EmailMessageId = insertId.Id,
SequenceNum = attachmentIdx + 1,
Filename = Path.GetFileName( attachments[ attachmentIdx ] )
}, trans,
commandType: CommandType.Text ).FirstOrDefault();
const int BUFSIZ = 32768;
using( Stream sqlFilestream = new SqlFileStream(
filestreamId.FilestreamPath, filestreamId.FilestreamContext,
FileAccess.Write ) ) {
using( FileStream infileStream =
File.Open( attachments[ attachmentIdx ],
FileMode.Open, FileAccess.Read,
FileShare.None ) ) {
infileStream.CopyTo( sqlFilestream, BUFSIZ );
infileStream.Close();
}
sqlFilestream.Close();
}
}
}
trans.Commit();
}
catch {
trans.Rollback();
throw;
}
}
connection.Close();
return true;
}
}
catch( Exception ex ) {
logger.Error( "Error in Send() method", ex );
throw;
}
}
]]></script>
</pre>
<div class="para">
A couple of notes about the code shown above:<br />
<ul>
<li>The code uses Marc Gravell and Sam Saffron's superb Micro-ORM <a href="https://github.com/StackExchange/dapper-dot-net">Dapper</a> which I highly recommend. While religious wars rage over the use of Micro-ORMs vs heavy ORMs I far prefer Dapper to other approaches;</li>
<li>The INSERT statements use the SQL Server OUTPUT clause to return ID information about the inserted rows, which is a more efficient method than sending a subsequent SELECT query for the information;</li>
<li>Once the streams have been opened, the .Net 4.0 CopyTo method will do a nice job of copying the bytes. If you're on an earlier version of the framework this method can easily be created. See Jon Skeet's sample implementation <a href="http://stackoverflow.com/a/5730893/49954">here</a>.</li>
</ul>
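<div class="para">For reference, a pre-4.0 stand-in for CopyTo might look like the following. This is my own sketch, not Jon Skeet's exact code, but it follows the same read-a-chunk/write-a-chunk pattern:</div>

```csharp
using System.IO;

public static class StreamExtensions
{
    // Copies all remaining bytes from source to destination in fixed-size
    // chunks, mirroring the .NET 4.0 Stream.CopyTo( Stream, int ) overload.
    public static void CopyTo( Stream source, Stream destination, int bufferSize )
    {
        byte[] buffer = new byte[ bufferSize ];
        int bytesRead;
        while( ( bytesRead = source.Read( buffer, 0, buffer.Length ) ) > 0 ) {
            destination.Write( buffer, 0, bytesRead );
        }
    }
}
```

<div class="para">On C# 3.0 or later you can add the <code>this</code> modifier to the first parameter to make it a proper extension method.</div>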
<br />
Once the email message has been inserted into the master table and we have its ID we can then attempt to insert the attachments into their corresponding detail table. This is done in two steps:<br />
<ol>
<li>Insert the metadata about the attachment to the EmailAttachments table. Once this is complete you can retrieve a file name and context ID for streaming attachment data to the FILESTREAM;</li>
<li>Open the FILESTREAM using provided framework methods for doing so. Write the attachment data to the FILESTREAM;</li>
</ol>
<br />
Seems simple, but there is a subtlety. The INSERT statement to add the metadata <i>must add at least one byte of data to the file using Transact-SQL</i>. That is indicated by the null byte ("0x00") that is the last value of the statement. If you don't supply this, instead supplying NULL or, as I initially attempted, default, SQL Server will not create a file since you haven't given it any data. Consequently the SQL Server PathName() function will return NULL and the call to open the SqlFileStream will fail unceremoniously.
</div>
<div class="para">
There are two ways I could have submitted the attachment data to SQL Server: as the last value of the INSERT statement to the EmailAttachments table, or via streaming as I did in the example. I chose the latter so that, in the case of very large attachments, I could stream the file in chunks rather than reading the entire file into memory to submit via the INSERT statement. This is less resource intensive under the heavy load I expect for this utility.
</div>
<div class="para">I then created a separate Windows service to read the messages, attempt delivery via SMTP, log successes and failures, and queue failed messages for a limited number of retries. The heart of the code that reads the attachments looks quite similar to the write operation.</div>
<pre lang="cs"><script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
public void GetAttachment( int attachmentId, string outfileName )
{
try {
using( IDbConnection connection = new SqlConnection( connectionString ) ) {
connection.Open();
using( IDbTransaction trans = connection.BeginTransaction() ) {
try {
FilestreamSelectResult fileInfo =
connection.Query<FilestreamSelectResult>(
@"SELECT FileData.PathName() AS FilestreamPath,
GET_FILESTREAM_TRANSACTION_CONTEXT() AS FilestreamContext
FROM Notification.EmailAttachments
WHERE EmailAttachmentId = @EmailAttachmentId",
new
{
EmailAttachmentId = attachmentId
},
transaction: trans,
commandType: CommandType.Text).FirstOrDefault();
const int BUFSIZ = 32768;
using(FileStream outfileStream = File.Open(
outfileName, FileMode.Create,
FileAccess.Write, FileShare.None)) {
using( Stream sqlFilestream =
new SqlFileStream( fileInfo.FilestreamPath,
fileInfo.FilestreamContext, FileAccess.Read ) ) {
sqlFilestream.CopyTo( outfileStream, BUFSIZ );
sqlFilestream.Close();
}
outfileStream.Close();
}
trans.Commit();
}
catch {
// Log error here
throw;
}
}
connection.Close();
}
}
catch( Exception ex ) {
logger.Error( "Error in GetAttachment method.", ex );
throw;
}
}
]]></script>
</pre>
<div class="para">
Some notes about the code shown above:<br />
<ul>
<li>I created a result class, shown earlier in this post, for retaining the file path and transaction context returned from the query;</li>
<li>Note that you must create a transaction for the SELECT in order for the GET_FILESTREAM_TRANSACTION_CONTEXT method to return a context that can be used in the SqlFileStream constructor;</li>
<li>Once again I have used the CopyTo method to move the bytes between the streams.</li>
</ul>
</div>
<h1>Summary</h1>
<div class="para">
That finishes the heart of the SQL Server FILESTREAM operations for the utility I was constructing. The real trick of it was the initial configuration and understanding the process. Hopefully this series of articles will help someone past the problems I encountered. Good luck and good coding!</div>Bob Mchttp://www.blogger.com/profile/10012849048492940436noreply@blogger.com4tag:blogger.com,1999:blog-1457816901612635732.post-12524187306502696372014-09-24T06:00:00.000-07:002014-09-24T06:28:51.992-07:00Processing SQL Server FILESTREAM Data, Part 3 - Creating Tables<style>
div.para {
margin-bottom: 7px;
}
</style>
<a href="http://www.codeproject.com/script/Articles/BlogFeedList.aspx?amid=7544" rel="tag" style="display: none;">CodeProject</a>
In <a href="http://blog.ramsoftsolutions.com/2014/09/processing-sql-server-filestream-data.html">Parts 1</a> <a href="http://blog.ramsoftsolutions.com/2014/09/processing-sql-server-filestream-data_22.html">and 2</a> of this series I discussed my experience with the SQL Server FILESTREAM technology, specifically the background of the decision and setup of the SQL Server. In this installment I discuss the tables created and how I specified the FILESTREAM BLOB column.
<br />
<h1>
Setting The Table</h1>
<div class="para">
So after some struggle I had SQL Server ready to handle FILESTREAMS. What I needed now were the requisite tables to store the data. This is achieved by adding a column to a table and indicating that BLOB data will live there in a file that is stored on a FILESTREAM filegroup. Here are the tables I used for my email and attachments log:</div>
<script class="brush: sql" type="syntaxhighlighter"><![CDATA[
CREATE TABLE Notification.EmailMessages(
EmailMessageId int IDENTITY NOT NULL,
TransmitStatusId int NOT NULL,
SubmitDate datetime NOT NULL,
TransmitDate datetime NULL,
AttemptCount int NOT NULL,
FromAddress varchar( 100 ) NOT NULL,
FromAlias varchar( 100 ) NULL,
ToAddresses varchar( 1000 ) NOT NULL,
CcAddresses varchar( 1000 ) NULL,
BccAddresses varchar( 1000 ) NULL,
Subject varchar( 1000 ) NULL,
Body text NULL,
timestamp timestamp NOT NULL,
CONSTRAINT PKEmailMessage
PRIMARY KEY( EmailMessageId ),
CONSTRAINT FK1EmailMessage
FOREIGN KEY( TransmitStatusId )
REFERENCES Notification.TransmitStatus( TransmitStatusId )
)
GO
CREATE TABLE Notification.EmailAttachments(
EmailAttachmentId int IDENTITY NOT NULL,
EmailMessageId int NOT NULL,
AttachmentFileId uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE,
SequenceNum int NOT NULL,
Filename varchar( 1000 ) NOT NULL,
FileData varbinary( max ) FILESTREAM NULL,
timestamp timestamp NOT NULL,
CONSTRAINT PKEmailAttachments
PRIMARY KEY( EmailAttachmentId ),
CONSTRAINT FK1EmailAttachments
FOREIGN KEY( EmailMessageId )
REFERENCES Notification.EmailMessages( EmailMessageId )
) ON [PRIMARY] FILESTREAM_ON FilestreamExampleFilegroup
]]></script>
<br />
<div class="para">
Most of the columns in the EmailMessages table are fairly self-explanatory. The TransmitStatusId column is a reference into a simple lookup table with an integer ID and description that indicates what state the message is in, e.g. Queued, Transmitted, Failed, etc. As you can see in the EmailAttachments table there are two columns that are somewhat out of the ordinary, the AttachmentFileId and FileData columns. But I'll explain each column so you can understand my approach to this design.</div>
<div class="para">
<ul>
<li><b>EmailAttachmentId</b> - Monotonically increasing surrogate value to be used as a primary key. I prefer these to a GUID when a natural key is not handy but if you want to have a religious war about it there are plenty of places where the battle rages. Feel free to take it there;</li>
<li><b>EmailMessageId</b> - Parent key to the EmailMessages table;</li>
<li><b>AttachmentFileId</b> - This is a unique GUID identifier for the row, as signified by the ROWGUIDCOL indicator, necessary for the FILESTREAM feature to uniquely identify the data;</li>
<li><b>SequenceNum</b> - Indicates the listing sequence of the attachment, for later reporting purposes;</li>
<li><b>Filename</b> - Saves the original file name, since FILESTREAM will create generated file names, and I will want to recreate the file names later when I'm actually transmitting the file via SMTP;</li>
<li><b>FileData</b> - The binary column where the file data is stored, although the data is read and written on the operating system file storage not the SQL Server data file.</li>
<li><b>timestamp</b> - Yes, I still use timestamp columns for concurrency. I'm an old-school kind of guy.</li>
</ul>
</div>
<div class="para">
The last part of the CREATE TABLE statement for the EmailAttachments table is where you specify the filegroup on which the FILESTREAM data will be stored. This references the filegroup we created in Processing SQL Server FILESTREAM Data, Part 2 - The Setup. And with that, we're finally ready to start coding!
</div>
<div class="para">
Next up - Processing SQL Server FILESTREAM Data, Part 4 - Readin' and Writin'</div>
Bob Mchttp://www.blogger.com/profile/10012849048492940436noreply@blogger.com0tag:blogger.com,1999:blog-1457816901612635732.post-11551769521983830522014-09-22T06:00:00.000-07:002014-09-22T13:17:36.836-07:00Processing SQL Server FILESTREAM Data, Part 2 - The Setup<style>div.para { margin-bottom: 7px; }</style>
<a href='http://www.codeproject.com/script/Articles/BlogFeedList.aspx?amid=7544' rel='tag' style='display:none;'>CodeProject</a>
In <a href="http://blog.ramsoftsolutions.com/2014/09/processing-sql-server-filestream-data.html">Part 1</a> of this topic I discussed the reasoning behind the decision to use Microsoft's FILESTREAM technology for a recent client project. In this installment I discuss the setup portion of this on the SQL Server side. I'll spare you much of the swing-and-a-miss frustration while attempting to understand how the parts work, but I'll try to pinpoint the traps that I located the hard way.
<h1>Stream of Consciousness</h1>
<div class="para">The first step is to ensure that SQL Server's FILESTREAM technology is enabled for the instance in which you're working. This isn't too difficult to configure, but there is a portion of it that might be confusing.</div>
<div class="para">In SQL Server Configuration Manager you will be presented with a list of SQL Server services that have been installed. Double click the SQL Server (MSSQLSERVER) service to see its configuration. The third tab in that dialog is the FILESTREAM configuration (see Image 1). The selections on this page require some explanation:
<ol>
<li>The "Enable FILESTREAM for Transact-SQL Access" seems pretty simple. This option is necessary for any FILESTREAM access. But what's subtle here is what it omits, which is the next portion;</li>
<li>The "Enable FILESTREAM for file I/O streaming access" is the portion that will allow you as a developer to read and write FILESTREAM data as if it were any other <a href="http://msdn.microsoft.com/en-us/library/system.io.stream(v=vs.110).aspx">.Net Stream</a>. I recommend enabling this since it allows some nifty capabilities that will be seen in the code for a subsequent post;</li>
<li>The "Windows share name" was another option that seemed obvious but was more subtle. This essentially creates a pseudo-share, like any other network share, that contains files that can be read and written. But it won't show up in Windows Explorer. It's only accessible via the <a href="http://msdn.microsoft.com/en-us/library/system.data.sqltypes.sqlfilestream(v=vs.110).aspx">SqlFileStream</a> .Net Framework class;</li>
<li>The final option, "Allow remote clients to have streaming access to FILESTREAM data" is still a bit of a mystery to me. Why would you enable the access without allowing remote clients to stream to it? Is it likely that only local clients would use it? It doesn't seem so to me but perhaps I'm mistaken.</li>
</ol>
</div>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja_Yn_fkCy0przEkwUbJudqP5bE3EpKkdkVj1-kXWVDEVS_viU78hMghagrsSz_Neo6_0gSQigplDB16FhJFs7tCeXD_Z4ExkEdjfqyTJRzLWAVrWL5Zqo7OeZ2Bpy_RJ8e56HExb7qqml/s1600/Filestream+Server+Config.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja_Yn_fkCy0przEkwUbJudqP5bE3EpKkdkVj1-kXWVDEVS_viU78hMghagrsSz_Neo6_0gSQigplDB16FhJFs7tCeXD_Z4ExkEdjfqyTJRzLWAVrWL5Zqo7OeZ2Bpy_RJ8e56HExb7qqml/s320/Filestream+Server+Config.png" /></a>
<br/>
Image 1 - FILESTREAM Configuration</div>
<h1>Instance Kharma</h1>
<div class="para">Next we need to ensure that our database instance is enabled to utilize FILESTREAM capabilities. This can be done from SQL Server Management Studio. Right click on the database instance and choose Properties from the resulting menu. The Advanced configuration selection in that dialog has a dropdown list for FILESTREAM support right at the very top (see Image 2). It's uncertain to me whether this step is necessary or not because I didn't necessarily do this in the prescribed order but it seemed to me that it needed to be done. I chose the "Full access enabled" option in order to employ the remote streaming access that will be shown in a subsequent post.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2MDgbehQS2FGIG8tOYCc6b7EKAbPJbHXXh8-OLN9lpHF6dMg_W2ZGnpLMMx9J-MLJ9C7EKWtpQk3ONe0hykgF5U5P9jJMiL_0PzJ0wKJyadFAzJMXhGiZMCKi5Snzw2kAUDa5MY2r2_0s/s1600/Filestream+Instance+Config.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2MDgbehQS2FGIG8tOYCc6b7EKAbPJbHXXh8-OLN9lpHF6dMg_W2ZGnpLMMx9J-MLJ9C7EKWtpQk3ONe0hykgF5U5P9jJMiL_0PzJ0wKJyadFAzJMXhGiZMCKi5Snzw2kAUDa5MY2r2_0s/s320/Filestream+Instance+Config.png" /></a>
<br/>
Image 2 - FILESTREAM Instance Configuration</div>
</div>
<h1>Filegroup Therapy</h1>
<div class="para">Since FILESTREAM BLOB data is stored on the file system it can't live inside the PRIMARY filegroup for a database. So we need to create a new filegroup and file to contain this data. This is done pretty simply with a few SQL statements, or so it would seem.</div>
<div class="para">First the filegroup.</div>
<pre lang="sql">
<script class="brush: sql" type="syntaxhighlighter"><![CDATA[
ALTER DATABASE FilestreamExample
ADD FILEGROUP FilestreamExampleFilegroup
CONTAINS FILESTREAM
GO
]]></script>
</pre>
<div class="para">This is very simple and straightforward. It creates a logical filegroup that specifies that the files contained within will be where FILESTREAM BLOB data is stored.</div>
<h1>Pernicious Permissions</h1>
<div class="para">Now that I had a filegroup I needed to add files to it. This is where things went a little sideways.</div>
<div class="para">The SQL code to add a file to a filegroup is not terribly complicated.</div>
<pre lang="sql">
<script class="brush: sql" type="syntaxhighlighter"><![CDATA[
ALTER DATABASE FilestreamExample
ADD FILE( NAME = N'FilestreamExampleFiles',
FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\FilestreamExampleFiles' )
TO FILEGROUP FilestreamExampleFilegroup
GO
]]></script>
</pre>
<div class="para">Upon execution of this piece of code I was presented with the following noxious error:</div>
<blockquote>Operating system error 0x80070005(Access is denied.) occurred while creating or opening file 'E:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\FilestreamExampleFiles'. Diagnose and correct the operating system error, and retry the operation.</blockquote>
<div class="para">As I investigated this issue I began to understand what was happening. SQL Server was attempting to create a folder on disk with the name I specified in the ALTER DATABASE command, which is where it would store the files that would comprise the BLOB data. But there was clearly a permissions issue creating the folder.</div>
<div class="para">Well, I'm a developer, not an IT technician, but I know enough to solve this kind of issue. Yet I was unable to do so in a satisfactory way. The SQL Server service was running under the NetworkService account, which seemed appropriate for the situation. That account had full control of the entire SQL Server folder tree and everything beneath it. But no matter what I did the problem persisted. I finally changed the service account to LocalSystem and the problem disappeared, but I'm uncomfortable with that answer. If I set the permissions for the NetworkService user, why was it unable to write to a local disk resource?</div>
<div class="para">Up Next - Processing SQL Server FILESTREAM Data, Part 3 - Creating Tables</div>
Bob Mchttp://www.blogger.com/profile/10012849048492940436noreply@blogger.com2tag:blogger.com,1999:blog-1457816901612635732.post-48318443427196421702014-09-20T11:51:00.000-07:002014-09-22T06:08:16.797-07:00Processing SQL Server FILESTREAM Data, Part 1<style>
div.para {
margin-bottom: 7px;
}
</style>
<a href='http://www.codeproject.com/script/Articles/BlogFeedList.aspx?amid=7544' rel='tag' style='display:none;'>CodeProject</a>
<div class="para">I recently finished a utility for a client that was a perfect situation to gain some experience with a technology that I hadn't used before, <a href="http://technet.microsoft.com/en-us/library/bb933993(v=sql.105).aspx">SQL Server's FILESTREAM</a> capability. This post and subsequent entries will discuss my travails with this technology, but let's set up a little backstory first. (Cue wavy flashback effect)</div>
<h1>Of Telephone Books And Happiness</h1>
<div class="para">In the early 2000's I co-founded a startup that offered IT services to the Yellow Pages advertising industry. The reasons why and how I ended up in the Yellow Pages industry form a strange and wondrous tale full of action and danger that is best left for another post, or over a lot of drinks. However, I thoroughly enjoyed being an entrepreneur despite the hours, effort, and challenges. And one of the challenges I had to overcome had to do with pages - lots and lots of pages.</div>
<div class="para">As part of our services we offered what are known as electronic <a href="http://en.wikipedia.org/wiki/Tear_sheet">tear sheets</a>, i.e. electronic copies of the page on which an actual advertisement was placed. So we had to carry all the pages from every book supplied by every Yellow Page publisher. Some of these were provided as individual PDF files and some were not provided at all. For the latter we took the physical book, sliced off the binding, and scanned each individual page which was then OCR'ed for headings and indexed into a SQL Server database. In either form, with so many publishers and pages, we ended up with millions of individual page files.</div>
<div class="para">As I noted in the previous paragraph each of these page files was indexed in a series of database tables but we needed access to the page image file without the overhead of having to retrieve and store said data into a SQL Server BLOB. Therefore, the page image files were stored on an NTFS file system on fast RAID storage. And everything worked very well, except for one thing - when the files are stored on the file system and not in SQL Server there is no relational integrity between the two data stores. Delete a row from the index table and you have an orphan file. Delete a file from a folder and you have an orphan index record. Maintaining as much integrity as we were able was a constant work-in-progress, with nightly pruning processes, validation routines, and reports. Very ugly but we made it work.</div>
<div class="para">In the release of SQL Server 2008 Microsoft included support for FILESTREAM BLOBs, that is binary large objects that were stored on the file system instead of within a SQL Server MDF file. The BLOB data is part of a row in a database table but essentially it becomes a reference to an individual file on the file system. The big advantage is that SQL Server maintains relational integrity between the table row and the data file. This wonder arrived too late for me, since the startup folded in 2011, but I recently discovered I could make use of it on a project for a current client.</div>
<h1>Email Logging For Fun And Profit</h1>
<div class="para">My client has myriad nightly processes and constantly running services that send notification emails to relevant parties. Their mail server, however, is outsourced and there have been occasions where the processes were unable to send the notification emails because the server or Internet access was unavailable. So they were looking for a solution that would ensure delivery of their email notifications. My first inclination was to use <a href="http://msdn.microsoft.com/en-us/library/ms711472(v=vs.85).aspx">MSMQ</a> since it's tailor made for guaranteed message delivery. But after further discussion with my client I discovered they additionally wanted to be able to log the messages for proof of delivery and frequency reporting so I started to lean towards a more database-centric solution. I've done this before - most email message information can be stored in a single table row.</div>
<div class="para">Unless there are attachments.</div>
<div class="para">A single email message can have zero to many file attachments, a traditional one to many <a href="http://en.wikipedia.org/wiki/Cardinality_(data_modeling)">cardinality</a>. I toyed with the idea of storing the files in a BLOB column but based on my prior experience I wasn't thrilled about the idea. This <a href="http://stackoverflow.com/questions/3748/storing-images-in-db-yea-or-nay">StackOverflow discussion</a> has some great points on both sides of the debate - I'll let you draw your own conclusions. So I started to devise a file storage solution like the one I created for my Yellow Pages startup, until I remembered the SQL Server feature that handles exactly this situation. Clearly Microsoft has run across this situation themselves and felt that a comprehensive solution was needed. So I rolled up my sleeves and started playing with the unfamiliar technology - a pursuit that's always fun but also frustrating. This was no exception.</div>
<div class="para">Up next - <a href="http://blog.ramsoftsolutions.com/2014/09/processing-sql-server-filestream-data_22.html">Processing SQL Server FILESTREAM Data, Part 2 - The Setup</a></div>Bob Mchttp://www.blogger.com/profile/10012849048492940436noreply@blogger.com0tag:blogger.com,1999:blog-1457816901612635732.post-90519601561927688002014-06-16T16:04:00.001-07:002014-09-20T14:10:41.902-07:00A Recipe For Password Security<a href='http://www.codeproject.com/script/Articles/BlogFeedList.aspx?amid=7544' rel='tag' style='display:none;'>CodeProject</a>
Several months ago I helped architect a password security scheme for a client. During that process I learned quite a bit about how to encrypt passwords in a secure fashion.<br />
<h1>
Encryption vs. Hashing</h1>
Most developers have heard the term "encryption", which means that data is encoded in such a way that it is not human-readable. But in the context of password security the word “encryption” implies that the encoding can be decoded, that is, it’s “two-way” encryption. While it may be advantageous to decode a user’s password, especially in situations where they have forgotten it, it opens up a security hole. Simply put, if someone attacking your security implementation can guess the algorithm and parameters used to encrypt passwords, they can then decrypt all the passwords in your system! At this point you have the equivalent of passwords stored in your system in plaintext – not an excellent approach.<br />
<br />
A much more secure method for storing encrypted passwords is to use a cryptographically secure hash<sup><a href="#p21">1</a></sup>. A “hash” is an algorithm that will take a block of data and from that information generate a value such that if any of the data is changed the hashed value will change as well. The block of data is generally called a “message” and the hashed value is called a “digest”. What is valuable about cryptographic hashes with regard to password security is that they are “one-way”, that is once the password has been hashed it cannot be decrypted back to its original plaintext form. This eliminates the security vulnerability that exists with two way encryption.<br />
<br />
By now I’m sure some of you have thought, “Great, if I have this hashed value how do I validate it against the plaintext password typed in by the user?” The answer is, you don’t. When the user types in their password you hash the value they entered using the same hash algorithm. You then compare that hashed value with the hashed password stored in your system. If they match, the user is authenticated.<br />
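To make the hash-and-compare flow concrete, here's a minimal sketch in Python using SHA-256 from the standard library. It's illustrative only - the function names are my own, and a bare, unsalted fast hash like this one is exactly what the rest of this post improves upon:

```python
import hashlib

def hash_password(plaintext):
    # One-way: the digest cannot be decoded back to the plaintext.
    return hashlib.sha256(plaintext.encode("utf-8")).hexdigest()

# At registration, only the digest is stored - never the plaintext.
stored_digest = hash_password("correct horse battery staple")

# At login, hash whatever the user typed and compare digests.
def authenticate(entered):
    return hash_password(entered) == stored_digest

print(authenticate("correct horse battery staple"))  # True
print(authenticate("password123"))                   # False
```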
<h1>
Adding Some Salt</h1>
So we now have a process for storing passwords in our system in a secure form that cannot be decrypted, thus closing the door that allows attackers access to all the passwords stored in the system. But determined attackers are not so easily thwarted. They will use a rainbow of methods to gain access to your systems, which segues (in a ham-handed fashion) into the next topic, rainbow tables.<br />
<br />
Since they can no longer decrypt your passwords, attackers will try the next best thing. They’ll take a large list of common words and passwords and hash them using some of the well-known standard algorithms. They’ll then compare this list of hashed words to your password list. Any matches will immediately indicate a successful password search. Given users’ penchant for commonly used passwords, the chances are good that the attacker will end up with quite a few successes.<br />
<br />
The generally accepted defense against this attack is to use a “password salt”<a href="#p22"><sup>2</sup></a>. A salt value is just a randomly generated value that is added to the user’s password before hashing. The salt value is then stored with the user’s hashed password so that the authentication method can use it when hashing a password entered by the user.<br />
<br />
Now I’m sure some of you are wondering how this prevents rainbow table attacks if the salt value is easily accessible. What the salt value does is require the attacker to regenerate all the values in their rainbow table using the specified salt value. Even if they have a match it will only work for the one user for which that particular salt value was used. While it doesn’t prevent a successful attack it certainly limits it to one success and makes it very slow and cumbersome for the attacker to make additional attempts on other passwords.<br />
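Here's a sketch of per-user salting, again in Python with only the standard library. The names are illustrative, and a production system would use a purpose-built password hash as discussed below rather than plain SHA-256:

```python
import hashlib
import os

def hash_with_salt(plaintext, salt):
    # The salt is mixed into the message before hashing.
    return hashlib.sha256(salt + plaintext.encode("utf-8")).hexdigest()

# Generate a fresh random salt per user and store it alongside
# the digest so authentication can repeat the same hash.
salt = os.urandom(16)
digest = hash_with_salt("hunter2", salt)

# The same password with a different salt produces a different digest,
# so one precomputed rainbow table can't cover all users.
print(hash_with_salt("hunter2", os.urandom(16)) == digest)  # False
print(hash_with_salt("hunter2", salt) == digest)            # True
```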
<h1>Needs Some Pepper</h1>
So how can we make it even more difficult for the determined attacker? Well, we can add a “secret salt” value to the password before we hash it. This value would be well known to the system, so that it can be reproduced as necessary for authentication, but would not be stored in the database. This type of value is commonly known as a “pepper” value. The fact that it is not published or stored makes it even more difficult for an attacker to guess what the plaintext value was before hashing. Unless they have access to the source code that produces the pepper value they may never be able to generate a successful rainbow table.<br />
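A pepper is just one more ingredient mixed in before hashing. In this hypothetical Python fragment the pepper is a constant baked into the code; in practice it would come from configuration or a secrets store, never the database:

```python
import hashlib
import os

# Hypothetical pepper value: known to the application,
# never written to the database.
PEPPER = b"example-pepper-not-stored-in-db"

def hash_password(plaintext, salt):
    # Salt (stored per user) plus pepper (kept out of the
    # database) are both mixed in before hashing.
    return hashlib.sha256(salt + PEPPER + plaintext.encode("utf-8")).hexdigest()

salt = os.urandom(16)
digest = hash_password("hunter2", salt)
# An attacker who steals the database gets digest and salt,
# but without PEPPER cannot build a matching rainbow table.
```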
<h1>Simmer Slowly</h1>
So it seems like we’ve covered all the bases. But we can’t forget about Moore’s Law<a href="#p23"><sup>3</sup></a>. As CPUs and GPUs get faster and faster it becomes easier to generate multiple rainbow tables so that an attacker can take many guesses at an encrypted password list. What’s a poor, security-minded developer to do?<br />
<br />
Well, how about we purposely slow them down?<br />
<br />
There are several well-known cryptographic hash algorithms<a href="#p24"><sup>4</sup></a>, such as the Message Digest derivatives (MD2, MD4, MD5) and the Secure Hash Algorithms from the National Security Agency (SHA-1, SHA-256), but many of these were designed to work quickly. In some cases, like MD5, the algorithm is considered “cryptographically broken”<a href="#p25"><sup>5</sup></a>. What we really need is a hash algorithm that can be tuned so that it is slow enough to discourage the generation of multiple rainbow tables but fast enough to hash a password quickly after a user types it in for authentication.<br />
<br />
Enter bcrypt<a href="#p26"><sup>6</sup></a>. Bcrypt is a hashing function based on the well-regarded Blowfish encryption algorithm that includes an iteration count to make it process more slowly. Even if the attacker knows that bcrypt is the algorithm in use, a properly selected iteration count renders the generation of rainbow tables very expensive. Furthermore, the iteration count is stored in the hashed result value, so it’s forward compatible; that is, as computing power continues to increase the iteration count can be increased and applied to existing password hashes so that generating rainbow tables continues to be expensive.<br />
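Bcrypt typically requires a third-party package, so as a stand-in here is a Python sketch using PBKDF2 from the standard library, which embodies the same tunable work-factor idea: raising the iteration count makes every guess proportionally more expensive.

```python
import hashlib
import os
import time

password = b"hunter2"
salt = os.urandom(16)

# The same password and salt, hashed at two different work factors.
for iterations in (1_000, 200_000):
    start = time.perf_counter()
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - start
    print("%8d iterations took %.4f seconds" % (iterations, elapsed))
```

As with bcrypt, the iteration count would be stored alongside the hash so it can be raised over time as hardware gets faster.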
<h1>A Spicy Meatball</h1>
So by using a combination of the right spices (salt and pepper) and the proper cook time (iterations) we can end up with an excellently prepared plate of hash. It’s not perfect - no security approach ever is - but we can certainly make our systems less vulnerable to the point where an attacker will look for victims that are less well-protected. And that’s all we can really hope for, that they look somewhere else.<br />
<h1>Additional References</h1>
<a href="http://blog.codinghorror.com/youre-probably-storing-passwords-incorrectly/" target="_blank">Coding Horror: You're Probably Storing Passwords Incorrectly</a>
<hr />
<a href="#" name="p21"><b>1</b></a>. <a href="http://en.wikipedia.org/wiki/Cryptographic_hash_function">http://en.wikipedia.org/wiki/Cryptographic_hash_function</a><br />
<a href="#" name="p22"><b>2</b></a>. <a href="http://en.wikipedia.org/wiki/Salt_(cryptography)">http://en.wikipedia.org/wiki/Salt_(cryptography)</a><br />
<a href="#" name="p23"><b>3</b></a>. <a href="http://en.wikipedia.org/wiki/Moores_law">http://en.wikipedia.org/wiki/Moores_law</a><br />
<a href="#" name="p24"><b>4</b></a>. <a href="http://en.wikipedia.org/wiki/Cryptographic_hash_function#Cryptographic_hash_algorithms">http://en.wikipedia.org/wiki/Cryptographic_hash_function#Cryptographic_hash_algorithms</a><br />
<a href="#" name="p25"><b>5</b></a>. <a href="http://en.wikipedia.org/wiki/MD5">http://en.wikipedia.org/wiki/MD5</a><br />
<a href="#" name="p26"><b>6</b></a>. <a href="http://en.wikipedia.org/wiki/Bcrypt">http://en.wikipedia.org/wiki/Bcrypt</a>
Bob Mchttp://www.blogger.com/profile/10012849048492940436noreply@blogger.com0tag:blogger.com,1999:blog-1457816901612635732.post-32776625574666040962014-06-15T17:38:00.000-07:002014-09-23T06:29:27.631-07:00Throwing a Great Block<a href='http://www.codeproject.com/script/Articles/BlogFeedList.aspx?amid=7544' rel='tag' style='display:none;'>CodeProject</a>
Last year I was working on a cloud-hosted Windows service for a client that contained an application-specific logging implementation. The existing architecture had log entries posted at various process points, e.g., file discovery, pickup, dropoff, and download. The log code would post a message to the Microsoft Message Queuing service (MSMQ) and a separate database writer service would dequeue those messages and post them to a series of tables in SQL Server.<br />
<br />
<h1>
Lagging The Play</h1>
While this setup worked perfectly well, it had one minor issue - the queueing of a log message to MSMQ happened sequentially. That means that while the service was attempting to post a log message to the queue, all other file processing was temporarily suspended. Since posting a log message to MSMQ is an inter-process communication, there will be a noticeable lag imposed on the calling thread. Add to that the possibility that the MSMQ service could be located on another server and you've now imposed network lag on the calling process as well. That's potentially alotta-lag! In the worst case, if MSMQ cannot be reached for some reason, file processing could be suspended for a very long time. For a platform that expects to process thousands of messages a day this was clearly not going to work as a long-term solution. However, the client wanted to retain MSMQ as a persistent message forwarding mechanism so that if the writer service was unavailable the log messages would not end up getting lost.<br />
<br />
<h1>
Block For Me</h1>
It seemed clear that what was needed was some way for the service to save log messages internally for near-term posting to MSMQ in a way that would minimally impact file processing. What came to mind initially was to have an internal Queue object on which the service could store log messages that could be dequeued and posted to MSMQ by another thread. It's a classic Producer-Consumer pattern<a href="#1"><sup>1</sup></a>. While this is a threading implementation that is not of surpassing difficulty to implement it has some subtleties that make it non-trivial. First, all access to the Queue object has to be thread-safe. Second, the MSMQ posting thread needs to enter a low-CPU-load no-operation loop while it's waiting for a log message to be queued. Wouldn't it be nice if there was something built into the .Net Framework to do all this?<br />
<br />
Well, sometimes Microsoft gets it right. In the .Net Framework 4 release Microsoft added something called a Blocking Collection<a href="#2" name="top2"><sup>2</sup></a> that does exactly what we needed. It allows for thread-safe Producer-Consumer patterns that do not consume CPU resources when there is nothing on the queue.<br />
<br />
Here's an example of how to implement it in a simple console application.<br />
<br />
First, we'll need a message class. In the service for the client the log information message was more complex, but this should give you the general idea.<br />
<br />
<script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
namespace BlockingCollectionExample
{
    class MyMessage
    {
        public int MessageId { get; set; }
        public string Message { get; set; }

        // override so that any caller of ToString gets this formatting
        public override string ToString()
        {
            return string.Format("Message with ID {0:#,##0} and value {1}.", MessageId, Message);
        }
    }
}
]]></script>
The real "meat" of the operation is in the class that encapsulates the blocking collection. Here's the first portion of the class definition.<br />
<script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
using System;
using System.Collections.Concurrent;
using System.Threading;

namespace BlockingCollectionExample
{
    class MyQueue : IDisposable
    {
        private BlockingCollection<MyMessage> messageQueue;
        private Thread dequeueThread;
        bool stopped = true;
        bool isStopping = false;

        public MyQueue()
        {
            messageQueue = new BlockingCollection<MyMessage>(new ConcurrentQueue<MyMessage>());
            dequeueThread = new Thread(new ThreadStart(DequeueMessageThread));
            dequeueThread.Name = "TransactionPostThread";
            dequeueThread.Start();
            stopped = false;
        }

        ~MyQueue()
        {
            Dispose(true);
        }
        ...
]]></script>
You'll notice that the class implements the IDisposable interface. This is so that the thread that dequeues the messages from the blocking collection can clean up after itself. This will be seen in another section of the code for this class.<br />
<br />
You'll also notice that when the BlockingCollection is defined we specify the class of objects that will be placed on the collection. However, when we instantiate the collection we signify that it should use a ConcurrentQueue object as the backing data store for the blocking collection. This ensures that the items placed in the collection will be handled in a thread-safe manner on a first-in, first-out (FIFO) basis.<br />
<br />
The finalizer method merely calls our Dispose method with a parameter indicating that it was called from the class' destructor, a common pattern for IDisposable implementations<a href="#3" name="top3"><sup>3</sup></a>. The Dispose methods will be shown in their entirety later in this post.<br />
<br />
<script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
public void AddLog(MyMessage message)
{
    Console.WriteLine("Enqueueing: " + message.ToString());
    messageQueue.Add(message);
}

private void DequeueMessageThread()
{
    try
    {
        while (true)
        {
            // Take blocks in a low-CPU wait state until a message arrives
            MyMessage message = messageQueue.Take();
            Console.WriteLine("Dequeueing: " + message.ToString());
            if (messageQueue.IsCompleted)
            {
                break;
            }
        }
    }
    catch (InvalidOperationException)
    {
        // if invalid op it's because queue was completed
    }
    catch (ThreadAbortException)
    {
        // Thread aborted due to queue issue, ignore
    }
    catch (Exception)
    {
        throw;
    }
}
...
]]></script>
The AddLog method is very simple; it invokes the blocking collection's Add method to enqueue the message in a thread-safe manner. The DequeueMessageThread method appears to be an endless loop that keeps attempting to dequeue a message, which you would expect to peg the CPU with tight looping. But here's where the magic of the blocking collection comes into play. The Take method of the blocking collection will enter a low-CPU wait state if nothing is found on the queue, blocking the loop from proceeding. As soon as a message is enqueued the Take method will return from the wait state and the loop will proceed. Note that once the collection is marked complete, the IsCompleted check right after the call ends the loop when the final message has been dequeued, and a Take that is already waiting will throw an InvalidOperationException, which is handled in the catch block.<br />
<br />
The exception handler in the method captures two specific exceptions:<br />
<ol>
<li>The InvalidOperationException will be signaled if the blocking collection is stopped. We'll see this in the Dispose method;</li>
<li>The ThreadAbortException will be signaled if the thread had to be killed because the Dispose method timed out waiting for the thread to finish.</li>
</ol>
<script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
public void Dispose()
{
    Dispose(false);
}

private void Dispose(bool fromDestructor)
{
    isStopping = true;
    int logShutdownTimeout = 30000;
    Console.WriteLine("Shutting down queue. Waiting for dequeue thread completion.");
    // Signal queue that we're shutting down
    messageQueue.CompleteAdding();
    // Wait for thread to complete before exiting
    do
    {
        if (!dequeueThread.Join(logShutdownTimeout))
        {
            // Queue thread may be stuck. Check for items in queue and kill thread if empty
            if (messageQueue.Count == 0)
            {
                System.Diagnostics.Debug.Print("Aborting thread");
                dequeueThread.Abort();
                break;
            }
        }
    } while (dequeueThread.IsAlive);
    Console.WriteLine("Dequeue thread complete.");
    if (!fromDestructor)
    {
        GC.SuppressFinalize(this);
    }
    stopped = true;
    isStopping = false;
}
]]></script>
In this code snippet the first Dispose method is our public interface that satisfies the requirement for IDisposable implementation. It simply calls our private Dispose method that takes a parameter indicating whether it was called from the class destructor method.<br />
<br />
The second private Dispose method is where some housekeeping for the blocking collection and dequeue thread happens. First we call the blocking collection's CompleteAdding method. This will disallow any further additions to the queue, minimizing the chance that the dequeue thread will never end because messages continue to be added. We then attempt to wait for the thread to complete by calling the thread's Join method, specifying a timeout value for the thread. If the thread is not complete within the specified timeout we forcibly destroy it and exit. Finally, if called from the class' destructor we can suppress the finalize method of the garbage collector.<br />
<br />
To utilize a producer-consumer queue like this one is quite simple:<br />
<script class="brush: c-sharp" type="syntaxhighlighter"><![CDATA[
class Program
{
    static void Main(string[] args)
    {
        using (MyQueue queue = new MyQueue())
        {
            for (int msgIdx = 1; msgIdx < 101; msgIdx++)
            {
                queue.AddLog(new MyMessage
                {
                    MessageId = msgIdx,
                    Message = string.Format("Message text # {0:#,##0}", msgIdx)
                });
            }
        }
    }
}
]]></script>
The using statement ensures that the queue's Dispose method is invoked upon completion, thereby stopping the dequeuing thread. When executed in a loop like this one that enqueues 100 messages, the tail end of the output looks like this:<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Enqueueing: Message with ID 92 and value Message text # 92.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Enqueueing: Message with ID 93 and value Message text # 93.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Enqueueing: Message with ID 94 and value Message text # 94.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 88 and value Message text # 88.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 89 and value Message text # 89.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 90 and value Message text # 90.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 91 and value Message text # 91.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Enqueueing: Message with ID 95 and value Message text # 95.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Enqueueing: Message with ID 96 and value Message text # 96.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Enqueueing: Message with ID 97 and value Message text # 97.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Enqueueing: Message with ID 98 and value Message text # 98.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 92 and value Message text # 92.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 93 and value Message text # 93.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 94 and value Message text # 94.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 95 and value Message text # 95.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Enqueueing: Message with ID 99 and value Message text # 99.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Enqueueing: Message with ID 100 and value Message text # 100.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 96 and value Message text # 96.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 97 and value Message text # 97.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 98 and value Message text # 98.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 99 and value Message text # 99.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeueing: Message with ID 100 and value Message text # 100.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Shutting down queue. Waiting for dequeue thread completion.</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: small;">Dequeue thread complete.</span><br />
<br />
As you can see the dequeue process slightly lags the enqueue process, as you would expect for processes running in separate threads. The messages are interspersed as the threads compete for the shared resource.<br />
<br />
<h1>
Finishing It Off</h1>
So what we've demonstrated is a way to implement a producer-consumer pattern without writing a lot of thread management code. While this pattern is not applicable in a great many situations it certainly has its uses. Any time you need to queue up items for processing but don't want to slow down the primary process give this pattern a try.<br />
<br />
<hr />
<a href="https://www.blogger.com/null" name="1"><b>1.</b></a> <a href="http://en.wikipedia.org/wiki/Producer-consumer_problem">http://en.wikipedia.org/wiki/Producer-consumer_problem</a><br />
<a href="https://www.blogger.com/null" name="2"><b>2.</b></a> <a href="http://msdn.microsoft.com/en-us/library/dd267312.aspx">http://msdn.microsoft.com/en-us/library/dd267312.aspx</a><br />
<a href="https://www.blogger.com/null" name="3"><b>3.</b></a> <a href="http://stackoverflow.com/a/538238/49954">http://stackoverflow.com/a/538238/49954</a>Bob Mchttp://www.blogger.com/profile/10012849048492940436noreply@blogger.com0