Lambada Calculus - Erik Öjebo.se: A Programming Blog by Erik Öjebo869http://www.erikojebo.se/Code/Details/869webmaster@erikojebo.seQuick Tip: Day name from Date in Excel for a Given Culture<p>In Excel, if you want to format a date using weekday names or similar, the default format has an implied language. If you explicitly want to specify which language/culture you want to use in the format, you can use the following formula:</p> <p> =TEXT(B2;&quot;[$-sv-SE]DDD&quot;)</p> <p>This will turn an Excel date into the Swedish short name of that weekday, for example &quot;M&#229;n&quot; for a date representing a Monday.</p>Thu, 04 May 2023 13:52:23 +02002023-05-04T13:52:23+02:00868http://www.erikojebo.se/Code/Details/868webmaster@erikojebo.seQuick Tip: Checking Password Expiration Date for an Active Directory Account<p>If you are wondering when your current password expires, use the following command:</p> <p>`Net user USERNAME /domain`</p> <p>where USERNAME is replaced with the actual username of the account.</p>Wed, 29 Jun 2022 09:10:59 +02002022-06-29T09:10:59+02:00867http://www.erikojebo.se/Code/Details/867webmaster@erikojebo.seQuick Tip: Excel VLOOKUP When Formatting Differs<p>Basic usage<br />-----------<br />The Excel VLOOKUP formula is incredibly useful, but a bit finicky to get right if you do not use it often enough.</p> <p>The basic formula follows this format:</p> <p> =VLOOKUP(A1;Sheet2!A:B;2;FALSE)</p> <p>### Parameters:</p> <p>1. The lookup value<br />2. The range to use, in which the lookup column must be the first column of the range<br />3. The column index in the lookup range where the actual value to use is found. Remember that the lookup column is counted as 1, so the next column to the right is column 2, and so on.<br />4. Range lookup (Exact or approximate match). 
Use FALSE for an exact match.</p> <p>&lt;br/&gt;</p> <p>### Important points to remember:</p> <p>* It is possible to specify the lookup range in the format A:F, which means that the entire columns are included in the range. That is, you do not have to bother with specifying row numbers unless you actually need to.<br />* If you do specify a range with row numbers, remember that you probably want to anchor the range with A$1:F$100 or $A$1:$F$100, so that you can drag-copy the formula to a range without offsetting the lookup range</p> <p><br />&lt;br/&gt;</p> <p>Differences In Formatting Between Lookup Value and Lookup Column<br />-----------------------<br />If the formatting differs between the lookup value and the lookup column, for example if one is a number formatted as a string and the other is formatted as a number, you will get an #N/A error in the VLOOKUP.</p> <p>If the lookup value is a number and the lookup column is text, you can use the TEXT function to fix the lookup:<br />=VLOOKUP(TEXT(A1;&quot;0&quot;);Sheet2!A:C;2;FALSE)</p> <p>If the situation is reversed, use the NUMBERVALUE function instead:<br />=VLOOKUP(NUMBERVALUE(A1);Sheet2!A:C;2;FALSE)</p> <p>REF Errors<br />---------------</p> <p>If you end up getting a #REF error, this probably means that your lookup value was found in the lookup column, but the specified column number points to a column that is outside of the actual range. Let&#39;s say that you specify the range A:B but then use the column number 3, which would actually point to the C column. 
Since the C column is not included in the range, Excel will give you a #REF error.</p> <p>If the lookup value was not found, the error would be #N/A instead.</p>Thu, 20 Jan 2022 17:14:34 +01002022-01-20T17:14:34+01:00866http://www.erikojebo.se/Code/Details/866webmaster@erikojebo.seBasic Administration of Local Users using PowerShell for Windows Server 2016<p>This is a tiny cheat sheet for basic administration of local users on a Windows Server 2016 machine using PowerShell:</p> <p>### Listing local users and their parameters<br />`Get-LocalUser | Format-List`</p> <p>### Listing available properties to modify<br />`Get-Help Set-LocalUser`</p> <p>### Setting a parameter for a local user<br />`Set-LocalUser somenamehere -Password (ConvertTo-SecureString &quot;somepassword&quot; -AsPlainText -Force)`</p> <p>Note that the -Password parameter expects a SecureString, so a plain string has to be converted first, as shown above.</p> <p>When setting boolean parameters, remember to use $false and $true, rather than false and true. Otherwise you will get an error due to passing a string instead of a boolean.</p>Sat, 18 Jan 2020 13:35:18 +01002020-01-18T13:35:18+01:00802http://www.erikojebo.se/Code/Details/802webmaster@erikojebo.seQuick Tip: Capturing Localhost Traffic using Fiddler<p>If you are having trouble capturing traffic from localhost to localhost when debugging HTTP traffic during development, try using the host `localhost.fiddler` instead of `localhost` in your request.</p> <p>However, you have to have Fiddler started and actively capturing traffic (i.e. not paused) for the host to be resolved properly.</p>Tue, 20 Sep 2016 12:34:01 +02002016-09-20T12:34:01+02:00795http://www.erikojebo.se/Code/Details/795webmaster@erikojebo.seQuick Tip: Opening a Terminal in the Current Directory from Windows Explorer<p>Ever browsed your file system using Windows Explorer, and wanted to launch a terminal window in that same directory? 
There is a hidden, but really neat, shortcut to open cmd or PowerShell from Explorer: Just go to the address bar and type `cmd` or `powershell` and there is your terminal with the working directory set to the one you had open in Windows Explorer!</p>Tue, 05 Jul 2016 20:32:16 +02002016-07-05T20:32:16+02:00794http://www.erikojebo.se/Code/Details/794webmaster@erikojebo.seQuick Tip: Configuring Yasnippet to Inherit Snippets from Another Mode<p>To share common snippets between similar Emacs modes I have the following yasnippet directory setup:</p> <p> /snippets<br /> /text-mode<br /> /some-mode<br /> /someother-mode</p> <p>I then place all shared snippets in the text-mode folder, which enables me to use them from the child modes. If you add a new mode for which you do not have any specific snippets, but you still want that mode to inherit snippets from the parent mode, simply create an empty child directory. This will let yasnippet know that you consider the new mode a child mode of the mode with the shared snippets.</p> <p>As a side note, if you have your configuration committed to a Git repository, the empty directory will not be added to the repository, since Git really only cares about files. 
To remedy this, add a dummy file to the directory, for example a file called `.keep`, or something similar.</p>Thu, 16 Jun 2016 09:02:45 +02002016-06-16T09:02:45+02:00792http://www.erikojebo.se/Code/Details/792webmaster@erikojebo.seAdding a Prometheus Metrics Endpoint to an ASP.NET MVC Application<p>The prometheus-net NuGet package contains a simple web server for hosting a metrics endpoint in process, but for an ASP.NET application it makes more sense just to add a metrics endpoint to the application.</p> <p>To add such an endpoint to an ASP.NET MVC application you simply add a MetricsController that looks like this:</p> <p> public class MetricsController : Controller<br /> {<br /> public ActionResult Index()<br /> {<br /> var acceptHeader = Request.Headers.Get(&quot;Accept&quot;);<br /> var acceptHeaders = acceptHeader?.Split(&#39;,&#39;);</p> <p> Response.ContentType = ScrapeHandler.GetContentType(acceptHeaders);</p> <p> ScrapeHandler.ProcessScrapeRequest(<br /> DefaultCollectorRegistry.Instance.CollectAll(), <br /> Response.ContentType, <br /> Response.OutputStream);</p> <p> HttpContext.ApplicationInstance.CompleteRequest();</p> <p> // Irrelevant, since it is after the CompleteRequest call<br /> return null;<br /> } <br /> }</p> <p>You can now go to the /metrics endpoint of your site and see all collected metrics. Remember that the page will be empty unless you actually do define some metrics (counters, gauges etc). Check out the [prometheus-net GitHub page](https://github.com/andrasm/prometheus-net) for details on how to do that.</p> <p>Note that Prometheus expects to be configured with a hostname (and port). It does not seem to like something like 192.168.0.1/foobar, but instead wants something like 192.168.0.1:12345. 
So, if you are working on your local machine, make sure to set up a proper IIS site with a binding to a custom port, rather than using IIS Express or hosting the site under the default IIS website.</p>Fri, 27 May 2016 21:31:58 +02002016-05-27T21:31:58+02:00791http://www.erikojebo.se/Code/Details/791webmaster@erikojebo.seDocker Toolbox: Adding a Shared Folder to the Docker Host Virtual Machine<p>If you are using Docker Toolbox (boot2docker) and want to make files available to your containers, or have the containers write data to permanent storage on your Windows system outside of the container, you need to first share a folder from your Windows system with the Docker host virtual machine.</p> <p>This is a guide to creating a shared folder in VirtualBox and then mounting that folder in the virtual machine.</p> <p>First, make sure the Docker host VM is stopped:</p> <p> docker-machine stop</p> <p>Create the VirtualBox shared folder for the virtual machine:</p> <p> cd &#39;c:\Program Files\Oracle\VirtualBox\&#39;</p> <p> .\VBoxManage.exe sharedfolder add &quot;&lt;your-vm-name&gt;&quot; --name &quot;&lt;some_name&gt;&quot; --hostpath &quot;C:\Some\Directory&quot;</p> <p>For example:</p> <p> .\VBoxManage.exe sharedfolder add &quot;default&quot; --name &quot;foo&quot; --hostpath &quot;C:\Foo&quot;</p> <p>Start the Docker host VM and ssh into it:</p> <p> docker-machine start<br /> docker-machine ssh</p> <p>The folder where you want to mount the shared folder must exist before mounting:</p> <p> mkdir /home/docker/foo</p> <p>Mount the shared folder:</p> <p> sudo mount -t vboxsf -o uid=1000,gid=50 your-shared-folder-name /home/docker/foo</p> <p>Make sure everything worked by checking that the folder has the expected contents:</p> <p> ls /home/docker/foo</p> <p>Once everything works, make the mount permanent by adding the mount command to the boot2docker profile:</p> <p> vi /mnt/sda1/var/lib/boot2docker/profile</p> <p>Add the `mkdir` and `mount` commands from above (no sudo 
required though) to the end of the profile file.</p> <p>Make sure that the profile changes worked by restarting the Docker machine and checking the folder contents:</p> <p> docker-machine stop<br /> docker-machine start<br /> ls /home/docker/foo</p> <p><br />Note: The docker-machine commands above assume that the currently active Docker machine is the one to which you want to add the shared folder; otherwise you need to specify the machine name when using the docker-machine commands.<br /></p>Wed, 25 May 2016 20:43:57 +02002016-05-25T20:43:57+02:00790http://www.erikojebo.se/Code/Details/790webmaster@erikojebo.seMinimal VI Newbie Survival Guide<p>If you find yourself SSHing into a server where VI is the only text editor available, here is a minimal survival guide to get your editing job done:</p> <p>* Open your file in VI, for example `vi /path/to/your/file`<br />* Move the cursor to the position where you want to start editing using the arrow keys<br />* Trigger insert mode by pressing `i`<br />* Do your editing<br />* Exit insert mode by pressing `Esc`<br />* Exit and save your changes by executing the command `:wq` (write and quit)<br />* If you want to exit without saving any changes, execute the command `:q!` instead</p> <p>&lt;br /&gt;<br />For the advanced newbie:</p> <p><br />* Copy (yank) a line by executing the command `yy`<br />* Paste the yanked line by positioning the cursor and executing the `p` or `P` command<br />* Go to end of file: `G`<br />* Go to end of line: `$`<br />* Go to beginning of line: `0`<br />* Go to end of word: `e`<br />* Go to beginning of word: `b`<br />* Go to next word: `w` </p>Wed, 25 May 2016 20:19:32 +02002016-05-25T20:19:32+02:00749http://www.erikojebo.se/Code/Details/749webmaster@erikojebo.seQuick Tip: Creating Symlinks with PowerShell (or actually with cmd...)<p>To create a symlink when in a PowerShell session, you can use the following command:</p> <p> cmd /c mklink &lt;link_path_to_create&gt; &lt;original_path&gt;</p> 
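<p>As a sketch, a concrete invocation might look like this. Both paths below are purely illustrative, and creating symlinks typically requires an elevated prompt on Windows:</p>

```powershell
# Illustrative paths only: create a symlink at C:\links\notes.txt
# that points to the existing file C:\data\notes.txt.
# Run from an elevated PowerShell session.
cmd /c mklink C:\links\notes.txt C:\data\notes.txt
```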
<p>If you want to symlink a directory rather than a file, use the following command:</p> <p> cmd /c mklink /D &lt;link_path_to_create&gt; &lt;original_path&gt;</p> <p>See https://technet.microsoft.com/sv-se/library/cc753194(v=ws.10).aspx for more info.</p>Tue, 24 Nov 2015 17:49:56 +01002015-11-24T17:49:56+01:00748http://www.erikojebo.se/Code/Details/748webmaster@erikojebo.seQuick Tip: PowerShell equivalent of find | xargs grep (find in files)<p>To search files in a given directory for a string, you can use the following PowerShell command:</p> <p> Get-ChildItem -Recurse -Include *.* | Select-String &quot;text to search for&quot;</p>Fri, 20 Nov 2015 10:57:22 +01002015-11-20T10:57:22+01:00734http://www.erikojebo.se/Code/Details/734webmaster@erikojebo.seTroubleshooting: "Could not load file or assembly..."<p>Here is a quick reminder to the future me:</p> <p>When troubleshooting an assembly load problem where the application is trying to load an incorrect version of an assembly, related to an upgrade of that assembly, remember to not only do a Clean and Rebuild, but also a straight up delete of all the binaries in the bin folder. That makes sure that there are no old binaries with incorrect versions lying around to cause trouble.</p>Mon, 19 Oct 2015 22:54:45 +02002015-10-19T22:54:45+02:00733http://www.erikojebo.se/Code/Details/733webmaster@erikojebo.seDisabling the Paste Protection in the Firefox DevTools<p>By default, Firefox does not allow you to paste stuff into the dev tools console. 
You have to manually type &quot;allow paste&quot; into the console to enable pasting.</p> <p>To get rid of this protection, go to `about:config` and set the setting `devtools.selfxss.count` to some high value, like 100.</p>Mon, 12 Oct 2015 08:53:24 +02002015-10-12T08:53:24+02:00731http://www.erikojebo.se/Code/Details/731webmaster@erikojebo.seUsing Paths Relative to Script File Directory in PowerShell<p>If you are not careful, using paths in PowerShell scripts has a tendency to make your script brittle:</p> <p> - Absolute paths: The script probably fails when executed on a different machine<br /> - Relative paths: The script fails if executed with a different working directory</p> <p>&lt;br /&gt;<br />To remedy this there is a magical and wonderful variable in PowerShell called `$PSScriptRoot` (in PowerShell 3+), which contains the path to the directory containing the current script file. By using this you know that you can safely access files relative to the actual script file, no matter the working directory, and without having to hard-code any absolute paths into the script.</p> <p>For example:</p> <p> Get-ChildItem $PSScriptRoot\some\directory</p> <p>If you are stuck on PowerShell 2 or below, you can use the following snippet instead, to get the same behaviour:</p> <p> $PSScriptRoot = Split-Path $MyInvocation.MyCommand.Path -Parent</p>Thu, 08 Oct 2015 21:11:52 +02002015-10-08T21:11:52+02:00730http://www.erikojebo.se/Code/Details/730webmaster@erikojebo.seCompiling Visual Studio Projects from PowerShell without Hard-coding the MSBuild Path<p>MSBuild makes it quite easy to compile .NET solutions or projects from the command line, but it is not available in the path by default, and the actual path to MSBuild is kind of unfriendly...</p> <p>To compile a project from a PowerShell script without having to hard-code the path to MSBuild into the script, you can do the following:</p> <p> Write-Host &quot;Compiling project...&quot;</p> <p> $ProjectFilePath = 
&quot;$PSScriptRoot\some\path\relative\to\script\directory.csproj&quot;</p> <p> # Get the actual MSBuild path from the registry<br /> $MSBuildToolsPathRegistryValue = Get-ItemProperty `<br /> -Path &quot;HKLM:\SOFTWARE\Microsoft\MSBuild\ToolsVersions\4.0&quot; `<br /> -Name &quot;MsBuildToolsPath&quot;<br /> $MSBuildDirectory = $MSBuildToolsPathRegistryValue.MSBuildToolsPath</p> <p> &amp; &quot;$MSBuildDirectory\msbuild.exe&quot; $ProjectFilePath &#39;/t:Clean;Rebuild&#39; /p:Configuration=Release<br /></p>Thu, 08 Oct 2015 21:02:35 +02002015-10-08T21:02:35+02:00721http://www.erikojebo.se/Code/Details/721webmaster@erikojebo.seFailing a TeamCity Build from a PowerShell Build Step<p>One of the most versatile and useful build steps in TeamCity for a Windows based shop is the PowerShell build step. Unfortunately, there is a quirk in the behaviour of the PowerShell build step that occurs if you use a PowerShell script file, rather than an inline script in the build step definition. The problem is that PowerShell does not return the actual exit code from the script when invoked with the -File parameter (see &lt;https://connect.microsoft.com/PowerShell/feedback/details/777375/powershell-exe-does-not-set-an-exit-code-when-file-is-used&gt;) which in turn results in TeamCity not being able to detect if the script failed. So, a failing script does not break the build. This is a major issue.</p> <p>Fortunately TeamCity provides a couple of different ways to solve this problem. </p> <p>## Fail on Error Output</p> <p>One easy, but a bit clumsy, way is to add a Build Failure Condition that fails the build when an error message is logged by the build runner (&lt;https://confluence.jetbrains.com/display/TCD9/Build+Failure+Conditions#BuildFailureConditions-Commonbuildfailureconditions&gt;), no matter why or what step produced the error. This will solve the problem, as long as the script actually writes to stderr (error output) when the error occurs. 
It might also mean that the build fails for errors that did not actually matter that much.</p> <p>One way to control the weight given to errors from a PowerShell script is to configure the Error Output parameter in the build step to either `warning` or `error`. For the script to fail the build, the Error Output parameter must be set to `error` (&lt;https://confluence.jetbrains.com/display/TCD9/PowerShell&gt;). So, for scripts that are non-vital to the build, where an error could be acceptable, set the Error Output to `warning`, which means that the output will appear as a warning in the actual build log. Setting it to `error` combined with the failure condition mentioned above will cause the build to fail when the script writes anything to stderr.</p> <p>## Service Messages</p> <p>To get more fine-grained control over when a script should fail the build, TeamCity provides another approach: Service Messages (&lt;https://confluence.jetbrains.com/display/TCD9/Build+Script+Interaction+with+TeamCity#BuildScriptInteractionwithTeamCity-ReportingBuildProblems&gt;).</p> <p>Through Service Messages the script can write output following a predefined format to stdout (NOT stderr). 
To fail the build using this approach, output text following this format:</p> <p> ##teamcity[buildProblem description=&#39;&lt;description&gt;&#39; identity=&#39;&lt;identity&gt;&#39;]</p> <p>Description is required, but Identity is optional.</p> <p>Here is an example of how that could look in an actual PowerShell script:</p> <p> Try<br /> {<br /> # Do stuff that might fail<br /> }<br /> Catch<br /> {<br /> # Get the current error<br /> $ErrorMessage = $_<br /> <br /> # Write a message to stdout to let TeamCity know that the script failed.<br /> # The exit code is not returned properly to TeamCity due to issues with invoking<br /> # PowerShell script files using -File<br /> # Observe that this message is written with Write-Host rather than Write-Error,<br /> # to make sure the message is written to stdout rather than stderr<br /> Write-Host &quot;##teamcity[buildProblem description=&#39;$ErrorMessage&#39;]&quot;<br /> <br /> Write-Error -Message $ErrorMessage<br /> <br /> exit 1<br /> }</p> <p> exit 0</p>Wed, 12 Aug 2015 09:14:45 +02002015-08-12T09:14:45+02:00719http://www.erikojebo.se/Code/Details/719webmaster@erikojebo.seQuick reference: Replacing web.config App Settings through Config Transform<p>The syntax for web.config transformations is something that I have to look up every time I need to use it, so here is a quick reference.</p> <p>To replace the value of an AppSetting which is declared in the root config (web.config), use the `Replace` transform:</p> <p> &lt;add key=&quot;MyAppSettingToOverride&quot; value=&quot;some value&quot; <br /> xdt:Transform=&quot;Replace&quot; xdt:Locator=&quot;Match(key)&quot; /&gt;</p> <p>IMPORTANT: Both the key itself and the Transform and Locator attributes are case-sensitive, so make sure you have the correct casing for the key and that the T and L respectively in Transform and Locator are uppercase.</p> <p>Removing an AppSetting declared in the root config is done via the `Remove` transform 
(unsurprisingly):</p> <p> &lt;add key=&quot;MyAppSettingToRemove&quot; xdt:Transform=&quot;Remove&quot; xdt:Locator=&quot;Match(key)&quot; /&gt;</p> <p>The transform for adding new settings which were NOT declared in the root config is `Insert`:</p> <p> &lt;add key=&quot;MyAppSettingToAdd&quot; value=&quot;some value&quot; xdt:Transform=&quot;Insert&quot; /&gt;</p> <p>IMPORTANT: You have to add the Transform attribute when adding new settings. If you do not include that attribute the setting will not be included in the final configuration file.</p> <p>To test your configuration, right-click the config file you are working on (e.g. web.release.config) and choose `Preview Transform`.</p>Fri, 24 Jul 2015 21:08:43 +02002015-07-24T21:08:43+02:00705http://www.erikojebo.se/Code/Details/705webmaster@erikojebo.seManaging Connection Strings with ASP.NET vNext and Azure Web Apps<p>At the time of writing this post ASP.NET vNext (5) is in beta 4. The googleability is still low, and the majority of the information is incomplete or written for an earlier beta, often with breaking code changes between the beta versions.</p> <p>One of the areas where I had problems finding information was connection string management in the new configuration system.</p> <p>The problem I wanted to solve was how to define connection strings for the production environment of an open source project, without having to check in any credentials or having to distribute credentials to all team members in some other way. I had seen that the Azure portal has a section for connection strings in the settings of a web app, but I could not find information about how those connection strings were injected into the app config.</p> <p>As it turns out, the best source of information for this kind of stuff is the default project template for a vNext MVC 6 application (NOT the WebAPI template). 
The default template contains configuration code for lots of basic stuff, including Entity Framework 7 configuration.</p> <p>Enough talk, this is the way to handle connection strings to be able to override them from the Azure portal:</p> <pre class='prettyprint'><br />// Config.json: <p>{<br /> &quot;Data&quot;: {<br /> &quot;DefaultConnection&quot;: {<br /> &quot;ConnectionString&quot;: &quot;Server=.;Database=TaskBoard;User Id=some_user;Password=some_password;&quot;<br /> }<br /> },<br /> &quot;EntityFramework&quot;: {<br /> &quot;ApplicationDbContext&quot;: { <br /> &quot;ConnectionStringKey&quot; : &quot;Data:DefaultConnection:ConnectionString&quot;<br /> }<br /> }<br />}<br /></pre></p> <pre class='prettyprint'><br /> // Startup.cs: <p> public class Startup<br /> {<br /> // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=398940<br /> public void ConfigureServices(IServiceCollection services)<br /> {<br /> services.AddMvc();</p> <p> services.AddEntityFramework()<br /> .AddSqlServer()<br /> .AddDbContext&lt;BoardContext&gt;();<br /> }</p> <p> public void Configure(IApplicationBuilder app)<br /> {<br /> app.UseMvc()<br /> .UseStaticFiles();<br /> }<br /> }<br /></pre></p> <pre class='prettyprint'><br /> // DbContext class: <p> public class SomeContext : DbContext<br /> {<br /> protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)<br /> {<br /> var config = new Configuration()<br /> .AddJsonFile(&quot;config.json&quot;)<br /> .AddEnvironmentVariables();</p> <p> // The development connection string is defined in the config.json file.<br /> // The connection strings for the Azure web app(s) are defined in the web app<br /> // settings in the Azure portal<br /> optionsBuilder.UseSqlServer(config[&quot;Data:DefaultConnection:ConnectionString&quot;]);</p> <p> base.OnConfiguring(optionsBuilder);<br /> }<br /></pre></p>Tue, 09 Jun 2015 21:39:20 
+02002015-06-09T21:39:20+02:00704http://www.erikojebo.se/Code/Details/704webmaster@erikojebo.seFiltering by Test Category in the Visual Studio Test Runner<p>The built-in Visual Studio test runner is quite limited, but it has a bit more functionality than it first appears.</p> <p>To filter the test list by a given category, enter the following filter in the search bar of the test runner:</p> <p> Trait:&quot;UITest&quot;</p> <p>![Visual Studio test runner with a filter to only include a given test category](/Images/upload/code_blog/vstestrunner_trait.png &quot;Visual Studio test runner with a filter showing only a single test category&quot;)</p> <p>If you instead want to exclude a test category from the list, i.e. show all tests which do NOT have the category, prefix the filter with a dash:</p> <p> -Trait:&quot;UITest&quot;</p> <p>These kinds of filters can be combined by simply appending one after another, separated by spaces:</p> <p> -Trait:&quot;UITest&quot; -Trait:&quot;DatabaseTest&quot;<br /></p>Sat, 02 May 2015 14:26:32 +02002015-05-02T14:26:32+02:00703http://www.erikojebo.se/Code/Details/703webmaster@erikojebo.seSending HTTP Requests from PowerShell<p>If you have a Unix/Linux background you are probably familiar with curl and wget, which are very useful utilities for sending HTTP requests from the command line. If you also manage Windows servers in some way you might have found that Windows has not historically shipped with any proper substitute for curl/wget.</p> <p>PowerShell has revolutionized server management on the Windows side, by embracing the Unix/Linux way of server management. However, it is still quite young, so the library of available cmdlets is still growing. 
In version 4 of PowerShell, Microsoft finally included a cmdlet called Invoke-WebRequest, which does just what its name implies.</p> <p>You can check what version of PowerShell you are running on your system by invoking the following command in PowerShell:</p> <p> $PSVersionTable.PSVersion</p> <p>If you have an earlier version of PowerShell you can upgrade by installing the [Windows Management Framework 4.0](http://www.microsoft.com/en-us/download/details.aspx?id=40855). After installing that, and rebooting your system, you should be able to find the Invoke-WebRequest command.</p> <p>You can now send an HTTP request like this:</p> <p> Invoke-WebRequest &lt;url&gt; -Method &lt;method&gt;</p> <p>For example, if you want to delete all indices in a local Elasticsearch instance:</p> <p> Invoke-WebRequest http://localhost:9200/* -Method DELETE</p> <p>For more details on how to use the Invoke-WebRequest cmdlet, use the PowerShell cmdlet Get-Help:</p> <p> Get-Help Invoke-WebRequest</p> <p>If you are interested in more usage examples, you can get those via the help as well:</p> <p> Get-Help Invoke-WebRequest -examples | more</p> <p>Piping the output through `more` lets you read the output of the help command page by page, instead of having to scroll back to find the start of the command output.</p> <p>If you would rather read about the command at TechNet, use the `-online` flag:</p> <p> Get-Help Invoke-WebRequest -online</p>Sat, 04 Apr 2015 22:19:29 +02002015-04-04T22:19:29+02:00675http://www.erikojebo.se/Code/Details/675webmaster@erikojebo.seImplementing a CSS Slideout Menu<p>There are plenty of tutorials available for how to implement a CSS slideout menu, but unfortunately the majority use far too complicated examples to be useful. 
Because of this I decided to put an extremely simple example out there for people (like me) who don&#39;t have the patience to wade through loads of unnecessary CSS and markup to find the essential parts.</p> <p>You can find the HTML, CSS and JavaScript for the slideout menu below. I also created a [JSFiddle](http://jsfiddle.net/8u3am70q/), for those of you who want to play around with the code yourselves.</p> <p>To keep the code simple I did not add any vendor prefixes in the CSS, so if you are having trouble viewing the demo, add the vendor prefixes or preferably go grab a decent browser :)</p> <p>## HTML</p> <p> &lt;html&gt;<br /> &lt;head&gt;<br /> &lt;meta charset=&quot;utf-8&quot; /&gt;<br /> &lt;link rel=&#39;stylesheet&#39; href=&#39;style.css&#39; /&gt;<br /> <br /> &lt;script type=&quot;text/javascript&quot; src=&#39;https://code.jquery.com/jquery-2.1.1.js&#39;&gt;&lt;/script&gt;<br /> &lt;script type=&quot;text/javascript&quot; src=&#39;index.js&#39;&gt;&lt;/script&gt;<br /> &lt;/head&gt;<br /> &lt;body&gt;<br /> &lt;nav id=&#39;slideout&#39;&gt;<br /> &lt;a href=&#39;&#39; class=&#39;menu-toggle&#39;&gt;Close&lt;/a&gt;<br /> <br /> <br /> &lt;ul&gt;<br /> &lt;li&gt;Home&lt;/li&gt;<br /> &lt;li&gt;About&lt;/li&gt;<br /> &lt;li&gt;Contact&lt;/li&gt;<br /> &lt;/ul&gt;<br /> &lt;/nav&gt;<br /> &lt;section id=&#39;page-content&#39;&gt;<br /> &lt;a href=&#39;&#39; class=&#39;menu-toggle&#39;&gt;&lt;/a&gt;<br /> <br /> &lt;p&gt;<br /> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse eu dictum tortor. Aenean id sem lobortis, luctus nunc ut, auctor tellus. Mauris et euismod erat. Aenean sollicitudin sapien id tellus convallis, vitae facilisis dui luctus. Cras id tincidunt est, non auctor lectus. Aliquam eleifend, turpis quis vulputate laoreet, risus ligula tincidunt lorem, et mollis sapien lacus at orci. Pellentesque euismod feugiat tortor, ac maximus neque fringilla id. Praesent ut egestas purus. 
Aenean nec porttitor urna.<br /> &lt;/p&gt;<br /> &lt;/section&gt;<br /> <br /> &lt;/body&gt;<br /> &lt;/html&gt;<br /> </p> <p>## CSS</p> <p> #slideout {<br /> position: absolute;<br /> top: 0px;<br /> left: 0px;<br /> transform: translatex(-100%);<br /> transition: all 200ms ease-in-out;<br /> <br /> background: #cecece;<br /> padding: 20px;<br /> height: 100%;<br /> }<br /> <br /> #slideout.show {<br /> transform: translatex(0%);<br /> transition: all 200ms ease-in-out;<br /> }<br /> </p> <p>## JavaScript</p> <p> $(function () {<br /> $(&#39;.menu-toggle&#39;).click(function () {<br /> $(&#39;#slideout&#39;).toggleClass(&#39;show&#39;);<br /> <br /> return false;<br /> });<br /> });</p>Sun, 23 Nov 2014 22:47:28 +01002014-11-23T22:47:28+01:00674http://www.erikojebo.se/Code/Details/674webmaster@erikojebo.seUsing Grunt to Publish Files via FTP in Active Mode<p>A key part in the automation of tasks in any project is to automate the deployment.</p> <p>If you use Grunt for automation and FTP to deploy to production there are a few different tasks from which to choose. However, not all of these support configuration of Active/Passive mode. In my case I need to be able to deploy in Active mode, so the task I use is [grunt-ftpscript](https://www.npmjs.org/package/grunt-ftpscript).</p> <p>To get started with `grunt-ftpscript` you first install the task into your project directory with `npm install grunt-ftpscript --save-dev`, which adds the necessary line to your `packages.json` file. You can then configure the task in your `gruntfile.js`. 
Here is an example:</p> <p> module.exports = function(grunt) {<br /> <br /> grunt.initConfig({<br /> pkg: grunt.file.readJSON(&#39;package.json&#39;),<br /> <br /> &#39;ftpscript&#39;: {<br /> publish: {<br /> options: {<br /> host: &#39;myhost.com&#39;,<br /> authKey: &#39;myhost&#39;,<br /> passive: false<br /> },<br /> files: [<br /> {<br /> expand: true,<br /> cwd: &#39;dist&#39;,<br /> src: [<br /> &#39;**/*&#39;,<br /> &#39;!**/exclude.js&#39; // Use ! to exclude files<br /> ],<br /> dest: &#39;/some/path/at/the/destination/ftp/server/&#39;<br /> }<br /> ]<br /> }<br /> }<br /> });<br /> <br /> grunt.loadNpmTasks(&#39;grunt-ftpscript&#39;);<br /> <br /> grunt.registerTask(&#39;default&#39;, [&#39;ftpscript:publish&#39;]);<br /> };</p> <p>The astute reader has probably already noticed that there are no credentials included in the configuration of the ftpscript task. Instead, the credentials are stored in a separate file called `.ftppass`. The `.ftppass` file can include multiple sets of credentials, and you use the `authKey` value to reference the correct set of credentials.</p> <p>Here is an example of what a `.ftppass` file can look like:</p> <p> {<br /> &quot;myhost&quot;: {<br /> &quot;username&quot;: &quot;someusername&quot;,<br /> &quot;password&quot;: &quot;p4$sw0rd&quot;<br /> }<br /> }</p> <p>Remember to think twice before deciding if you want to put your `.ftppass` file under version control. If you are working on an open source project, you had better add `.ftppass` to your `.gitignore` before committing your new changes to your grunt config. 
:)</p>Sat, 22 Nov 2014 21:28:55 +01002014-11-22T21:28:55+01:00672http://www.erikojebo.se/Code/Details/672webmaster@erikojebo.seImplementing an HTML5 Touch Tap Event as a jQuery Plugin<p>A very important part of implementing web applications for mobile devices is, of course, to handle touch events in a way that feels natural to the user.</p> <p>I recently fixed a bug in one of my projects where a swipe up/down to scroll was interpreted as a tap on the item where the user touched the screen to start the scroll. The bug appeared because the application was listening for the `touchend` event to trigger the &#39;click&#39; logic. The problem with this is that the `touchend` event is fired regardless of whether the user tapped a single spot on the screen or swiped.</p> <p>To distinguish between a tap and a swipe I decided to track whether the user moved during the touch gesture, and if so the click logic was not triggered. Since this is a piece of code I will need in more than one of my projects I made it into a jQuery plugin.</p> <p>Below is the implementation of the jQuery plugin I wrote, which adds an `onTap` function that takes a callback. 
The callback will be called when the user taps the element(s) without swiping.</p> <p> (function ($) {<br /> $.fn.onTap = function (callback) {<br /> <br /> function currentScrollPosition () {<br /> return $(window).scrollTop();<br /> }<br /> <br /> var touchStartPosition = 0;<br /> var maxTouchMovement = 0;<br /> <br /> this.on(&#39;touchstart&#39;, function () {<br /> maxTouchMovement = 0;<br /> touchStartPosition = currentScrollPosition();<br /> return true;<br /> });<br /> <br /> this.on(&#39;touchend&#39;, function () {<br /> <br /> if (maxTouchMovement &lt; 5) {<br /> callback.call(this);<br /> return false;<br /> }<br /> <br /> return true;<br /> });<br /> <br /> this.on(&#39;touchmove&#39;, function () {<br /> var currentDistanceFromTouchStart = Math.abs(touchStartPosition - currentScrollPosition());<br /> maxTouchMovement = Math.max(maxTouchMovement, currentDistanceFromTouchStart);<br /> <br /> return true;<br /> });<br /> <br /> return this;<br /> }<br /> })(jQuery)</p> <p>The jQuery plugin can now be used by passing in a function which, in the usual jQuery style, can access the element(s) in question through &#39;this&#39;:<br /> <br /> $(function () {<br /> $(&#39;.some_selector_here&#39;).onTap(function () {<br /> // Do something smart with the tapped element<br /> $(this).toggleClass(&#39;tapped&#39;);<br /> })<br /> })<br /> <br />By using the `touchmove` event to track the maximum distance the user has moved from the starting point during the touch gesture, we can make sure that the gesture is not seen as a tap even if the user returns to the starting position. Only a touch and release in the same spot, without any major movement during the touch, is seen as a tap gesture.</p>Fri, 17 Oct 2014 23:14:46 +02002014-10-17T23:14:46+02:00628http://www.erikojebo.se/Code/Details/628webmaster@erikojebo.seSetting the Default Database for a SQL Server User<p>Quite often a given SQL Server user is primarily used with one specific database on a server. 
To make life a little easier you can set the default database for a user, so that new queries by default are run against that database, etc. Here is how to do it in T-SQL:</p> <p> ALTER LOGIN [SomeLogin] WITH DEFAULT_DATABASE = SomeDatabaseName</p> <p>A word of warning: if you for some reason drop the database which is set as the default for a user, then that user will not be able to log in. If that problem occurs you can change the database to log in to in the Options tab of the SQL Server Management Studio login dialog.</p> <p>If you manage to log in to SSMS you can then change the default database for the user back to another database which actually still exists, using the example above.</p>Tue, 04 Mar 2014 20:38:55 +01002014-03-04T20:38:55+01:00627http://www.erikojebo.se/Code/Details/627webmaster@erikojebo.seEmacs Kata #3: HTTP Status Codes<p>Today&#39;s Emacs kata is all about HTTP status codes. The exercise is to convert the status code section of the HTTP 1.1 specification into a bunch of C# public members on a status code class.</p> <p>The input text can be found in [section 10 of the HTTP 1.1 spec](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10). 
The entire input text is supplied at the bottom of the page.</p> <p>The end result should look like this (note that the reserved 306 code needs a valid identifier, such as `Unused`, to compile):</p> <p> public static readonly HttpStatus Continue = new HttpStatus(100, &quot;Continue&quot;);<br /> public static readonly HttpStatus SwitchingProtocols = new HttpStatus(101, &quot;Switching Protocols&quot;);<br /> public static readonly HttpStatus OK = new HttpStatus(200, &quot;OK&quot;);<br /> public static readonly HttpStatus Created = new HttpStatus(201, &quot;Created&quot;);<br /> public static readonly HttpStatus Accepted = new HttpStatus(202, &quot;Accepted&quot;);<br /> public static readonly HttpStatus NonAuthoritativeInformation = new HttpStatus(203, &quot;Non-Authoritative Information&quot;);<br /> public static readonly HttpStatus NoContent = new HttpStatus(204, &quot;No Content&quot;);<br /> public static readonly HttpStatus ResetContent = new HttpStatus(205, &quot;Reset Content&quot;);<br /> public static readonly HttpStatus PartialContent = new HttpStatus(206, &quot;Partial Content&quot;);<br /> public static readonly HttpStatus MultipleChoices = new HttpStatus(300, &quot;Multiple Choices&quot;);<br /> public static readonly HttpStatus MovedPermanently = new HttpStatus(301, &quot;Moved Permanently&quot;);<br /> public static readonly HttpStatus Found = new HttpStatus(302, &quot;Found&quot;);<br /> public static readonly HttpStatus SeeOther = new HttpStatus(303, &quot;See Other&quot;);<br /> public static readonly HttpStatus NotModified = new HttpStatus(304, &quot;Not Modified&quot;);<br /> public static readonly HttpStatus UseProxy = new HttpStatus(305, &quot;Use Proxy&quot;);<br /> public static readonly HttpStatus Unused = new HttpStatus(306, &quot;(Unused)&quot;);<br /> public static readonly HttpStatus TemporaryRedirect = new HttpStatus(307, &quot;Temporary Redirect&quot;);<br /> public static readonly HttpStatus BadRequest = new HttpStatus(400, &quot;Bad Request&quot;);<br /> public static readonly HttpStatus Unauthorized = new 
HttpStatus(401, &quot;Unauthorized&quot;);<br /> public static readonly HttpStatus PaymentRequired = new HttpStatus(402, &quot;Payment Required&quot;);<br /> public static readonly HttpStatus Forbidden = new HttpStatus(403, &quot;Forbidden&quot;);<br /> public static readonly HttpStatus NotFound = new HttpStatus(404, &quot;Not Found&quot;);<br /> public static readonly HttpStatus MethodNotAllowed = new HttpStatus(405, &quot;Method Not Allowed&quot;);<br /> public static readonly HttpStatus NotAcceptable = new HttpStatus(406, &quot;Not Acceptable&quot;);<br /> public static readonly HttpStatus ProxyAuthenticationRequired = new HttpStatus(407, &quot;Proxy Authentication Required&quot;);<br /> public static readonly HttpStatus RequestTimeout = new HttpStatus(408, &quot;Request Timeout&quot;);<br /> public static readonly HttpStatus Conflict = new HttpStatus(409, &quot;Conflict&quot;);<br /> public static readonly HttpStatus Gone = new HttpStatus(410, &quot;Gone&quot;);<br /> public static readonly HttpStatus LengthRequired = new HttpStatus(411, &quot;Length Required&quot;);<br /> public static readonly HttpStatus PreconditionFailed = new HttpStatus(412, &quot;Precondition Failed&quot;);<br /> public static readonly HttpStatus RequestEntityTooLarge = new HttpStatus(413, &quot;Request Entity Too Large&quot;);<br /> public static readonly HttpStatus RequestURITooLong = new HttpStatus(414, &quot;Request-URI Too Long&quot;);<br /> public static readonly HttpStatus UnsupportedMediaType = new HttpStatus(415, &quot;Unsupported Media Type&quot;);<br /> public static readonly HttpStatus RequestedRangeNotSatisfiable = new HttpStatus(416, &quot;Requested Range Not Satisfiable&quot;);<br /> public static readonly HttpStatus ExpectationFailed = new HttpStatus(417, &quot;Expectation Failed&quot;);<br /> public static readonly HttpStatus InternalServerError = new HttpStatus(500, &quot;Internal Server Error&quot;);<br /> public static readonly HttpStatus NotImplemented = new 
HttpStatus(501, &quot;Not Implemented&quot;);<br /> public static readonly HttpStatus BadGateway = new HttpStatus(502, &quot;Bad Gateway&quot;);<br /> public static readonly HttpStatus ServiceUnavailable = new HttpStatus(503, &quot;Service Unavailable&quot;);<br /> public static readonly HttpStatus GatewayTimeout = new HttpStatus(504, &quot;Gateway Timeout&quot;);<br /> public static readonly HttpStatus HTTPVersionNotSupported = new HttpStatus(505, &quot;HTTP Version Not Supported&quot;);</p> <p>Here is a drop of the original text:</p> <p> part of Hypertext Transfer Protocol -- HTTP/1.1<br /> RFC 2616 Fielding, et al.<br /> 10 Status Code Definitions<br /> <br /> Each Status-Code is described below, including a description of which method(s) it can follow and any metainformation required in the response.<br /> <br /> 10.1 Informational 1xx<br /> <br /> This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and is terminated by an empty line. There are no required headers for this class of status code. Since HTTP/1.0 did not define any 1xx status codes, servers MUST NOT send a 1xx response to an HTTP/1.0 client except under experimental conditions.<br /> <br /> A client MUST be prepared to accept one or more 1xx status responses prior to a regular response, even if the client does not expect a 100 (Continue) status message. Unexpected 1xx status responses MAY be ignored by a user agent.<br /> <br /> Proxies MUST forward 1xx responses, unless the connection between the proxy and its client has been closed, or unless the proxy itself requested the generation of the 1xx response. (For example, if a<br /> <br /> proxy adds a &quot;Expect: 100-continue&quot; field when it forwards a request, then it need not forward the corresponding 100 (Continue) response(s).)<br /> <br /> 10.1.1 100 Continue<br /> <br /> The client SHOULD continue with its request. 
This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed. See section 8.2.3 for detailed discussion of the use and handling of this status code.<br /> <br /> 10.1.2 101 Switching Protocols<br /> <br /> The server understands and is willing to comply with the client&#39;s request, via the Upgrade message header field (section 14.42), for a change in the application protocol being used on this connection. The server will switch protocols to those defined by the response&#39;s Upgrade header field immediately after the empty line which terminates the 101 response.<br /> <br /> The protocol SHOULD be switched only when it is advantageous to do so. For example, switching to a newer version of HTTP is advantageous over older versions, and switching to a real-time, synchronous protocol might be advantageous when delivering resources that use such features.<br /> <br /> 10.2 Successful 2xx<br /> <br /> This class of status code indicates that the client&#39;s request was successfully received, understood, and accepted.<br /> <br /> 10.2.1 200 OK<br /> <br /> The request has succeeded. 
The information returned with the response is dependent on the method used in the request, for example:<br /> <br /> GET an entity corresponding to the requested resource is sent in the response;<br /> <br /> HEAD the entity-header fields corresponding to the requested resource are sent in the response without any message-body;<br /> <br /> POST an entity describing or containing the result of the action;<br /> <br /> TRACE an entity containing the request message as received by the end server.<br /> <br /> 10.2.2 201 Created<br /> <br /> The request has been fulfilled and resulted in a new resource being created. The newly created resource can be referenced by the URI(s) returned in the entity of the response, with the most specific URI for the resource given by a Location header field. The response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. The origin server MUST create the resource before returning the 201 status code. If the action cannot be carried out immediately, the server SHOULD respond with 202 (Accepted) response instead.<br /> <br /> A 201 response MAY contain an ETag response header field indicating the current value of the entity tag for the requested variant just created, see section 14.19.<br /> <br /> 10.2.3 202 Accepted<br /> <br /> The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.<br /> <br /> The 202 response is intentionally non-committal. 
Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent&#39;s connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request&#39;s current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.<br /> <br /> 10.2.4 203 Non-Authoritative Information<br /> <br /> The returned metainformation in the entity-header is not the definitive set as available from the origin server, but is gathered from a local or a third-party copy. The set presented MAY be a subset or superset of the original version. For example, including local annotation information about the resource might result in a superset of the metainformation known by the origin server. Use of this response code is not required and is only appropriate when the response would otherwise be 200 (OK).<br /> <br /> 10.2.5 204 No Content<br /> <br /> The server has fulfilled the request but does not need to return an entity-body, and might want to return updated metainformation. The response MAY include new or updated metainformation in the form of entity-headers, which if present SHOULD be associated with the requested variant.<br /> <br /> If the client is a user agent, it SHOULD NOT change its document view from that which caused the request to be sent. 
This response is primarily intended to allow input for actions to take place without causing a change to the user agent&#39;s active document view, although any new or updated metainformation SHOULD be applied to the document currently in the user agent&#39;s active view.<br /> <br /> The 204 response MUST NOT include a message-body, and thus is always terminated by the first empty line after the header fields.<br /> <br /> 10.2.6 205 Reset Content<br /> <br /> The server has fulfilled the request and the user agent SHOULD reset the document view which caused the request to be sent. This response is primarily intended to allow input for actions to take place via user input, followed by a clearing of the form in which the input is given so that the user can easily initiate another input action. The response MUST NOT include an entity.<br /> <br /> 10.2.7 206 Partial Content<br /> <br /> The server has fulfilled the partial GET request for the resource. The request MUST have included a Range header field (section 14.35) indicating the desired range, and MAY have included an If-Range header field (section 14.27) to make the request conditional.<br /> <br /> The response MUST include the following header fields:<br /> <br /> - Either a Content-Range header field (section 14.16) indicating<br /> the range included with this response, or a multipart/byteranges<br /> Content-Type including Content-Range fields for each part. 
If a<br /> Content-Length header field is present in the response, its<br /> value MUST match the actual number of OCTETs transmitted in the<br /> message-body.<br /> - Date<br /> - ETag and/or Content-Location, if the header would have been sent<br /> in a 200 response to the same request<br /> - Expires, Cache-Control, and/or Vary, if the field-value might<br /> differ from that sent in any previous response for the same<br /> variant<br /> If the 206 response is the result of an If-Range request that used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. If the response is the result of an If-Range request that used a weak validator, the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. Otherwise, the response MUST include all of the entity-headers that would have been returned with a 200 (OK) response to the same request.<br /> <br /> A cache MUST NOT combine a 206 response with other previously cached content if the ETag or Last-Modified headers do not match exactly, see 13.5.4.<br /> <br /> A cache that does not support the Range and Content-Range headers MUST NOT cache 206 (Partial) responses.<br /> <br /> 10.3 Redirection 3xx<br /> <br /> This class of status code indicates that further action needs to be taken by the user agent in order to fulfill the request. The action required MAY be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A client SHOULD detect infinite redirection loops, since such loops generate network traffic for each redirection.<br /> <br /> Note: previous versions of this specification recommended a<br /> maximum of five redirections. 
Content developers should be aware<br /> that there might be clients that implement such a fixed<br /> limitation.<br /> 10.3.1 300 Multiple Choices<br /> <br /> The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent- driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location.<br /> <br /> Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content- Type header field. Depending upon the format and the capabilities of<br /> <br /> the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.<br /> <br /> If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise.<br /> <br /> 10.3.2 301 Moved Permanently<br /> <br /> The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.<br /> <br /> The new permanent URI SHOULD be given by the Location field in the response. 
Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).<br /> <br /> If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.<br /> <br /> Note: When automatically redirecting a POST request after<br /> receiving a 301 status code, some existing HTTP/1.0 user agents<br /> will erroneously change it into a GET request.<br /> 10.3.3 302 Found<br /> <br /> The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.<br /> <br /> The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).<br /> <br /> If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.<br /> <br /> Note: RFC 1945 and RFC 2068 specify that the client is not allowed<br /> to change the method on the redirected request. However, most<br /> existing user agent implementations treat 302 as if it were a 303<br /> response, performing a GET on the Location field-value regardless<br /> of the original request method. 
The status codes 303 and 307 have<br /> been added for servers that wish to make unambiguously clear which<br /> kind of reaction is expected of the client.<br /> 10.3.4 303 See Other<br /> <br /> The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable.<br /> <br /> The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).<br /> <br /> Note: Many pre-HTTP/1.1 user agents do not understand the 303<br /> status. When interoperability with such clients is a concern, the<br /> 302 status code may be used instead, since most user agents react<br /> to a 302 response as described here for 303.<br /> 10.3.5 304 Not Modified<br /> <br /> If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. 
The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields.<br /> <br /> The response MUST include the following header fields:<br /> <br /> - Date, unless its omission is required by section 14.18.1<br /> If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly.<br /> <br /> - ETag and/or Content-Location, if the header would have been sent<br /> in a 200 response to the same request<br /> - Expires, Cache-Control, and/or Vary, if the field-value might<br /> differ from that sent in any previous response for the same<br /> variant<br /> If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers.<br /> <br /> If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional.<br /> <br /> If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response.<br /> <br /> 10.3.6 305 Use Proxy<br /> <br /> The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers.<br /> <br /> Note: RFC 2068 was not clear that 305 was intended to redirect a<br /> single request, and to be generated by origin servers only. 
Not<br /> observing these limitations has significant security consequences.<br /> 10.3.7 306 (Unused)<br /> <br /> The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved.<br /> <br /> 10.3.8 307 Temporary Redirect<br /> <br /> The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.<br /> <br /> The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s) , since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI.<br /> <br /> If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.<br /> <br /> 10.4 Client Error 4xx<br /> <br /> The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included entity to the user.<br /> <br /> If the client is sending data, a server implementation using TCP SHOULD be careful to ensure that the client acknowledges receipt of the packet(s) containing the response, before the server closes the input connection. 
If the client continues sending data to the server after the close, the server&#39;s TCP stack will send a reset packet to the client, which may erase the client&#39;s unacknowledged input buffers before they can be read and interpreted by the HTTP application.<br /> <br /> 10.4.1 400 Bad Request<br /> <br /> The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.<br /> <br /> 10.4.2 401 Unauthorized<br /> <br /> The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information. HTTP access authentication is explained in &quot;HTTP Authentication: Basic and Digest Access Authentication&quot; [43].<br /> <br /> 10.4.3 402 Payment Required<br /> <br /> This code is reserved for future use.<br /> <br /> 10.4.4 403 Forbidden<br /> <br /> The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity. 
If the server does not wish to make this information available to the client, the status code 404 (Not Found) can be used instead.<br /> <br /> 10.4.5 404 Not Found<br /> <br /> The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address. This status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable.<br /> <br /> 10.4.6 405 Method Not Allowed<br /> <br /> The method specified in the Request-Line is not allowed for the resource identified by the Request-URI. The response MUST include an Allow header containing a list of valid methods for the requested resource.<br /> <br /> 10.4.7 406 Not Acceptable<br /> <br /> The resource identified by the request is only capable of generating response entities which have content characteristics not acceptable according to the accept headers sent in the request.<br /> <br /> Unless it was a HEAD request, the response SHOULD include an entity containing a list of available entity characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.<br /> <br /> Note: HTTP/1.1 servers are allowed to return responses which are<br /> not acceptable according to the accept headers sent in the<br /> request. In some cases, this may even be preferable to sending a<br /> 406 response. 
User agents are encouraged to inspect the headers of<br /> an incoming response to determine if it is acceptable.<br /> If the response could be unacceptable, a user agent SHOULD temporarily stop receipt of more data and query the user for a decision on further actions.<br /> <br /> 10.4.8 407 Proxy Authentication Required<br /> <br /> This code is similar to 401 (Unauthorized), but indicates that the client must first authenticate itself with the proxy. The proxy MUST return a Proxy-Authenticate header field (section 14.33) containing a challenge applicable to the proxy for the requested resource. The client MAY repeat the request with a suitable Proxy-Authorization header field (section 14.34). HTTP access authentication is explained in &quot;HTTP Authentication: Basic and Digest Access Authentication&quot; [43].<br /> <br /> 10.4.9 408 Request Timeout<br /> <br /> The client did not produce a request within the time that the server was prepared to wait. The client MAY repeat the request without modifications at any later time.<br /> <br /> 10.4.10 409 Conflict<br /> <br /> The request could not be completed due to a conflict with the current state of the resource. This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. The response body SHOULD include enough<br /> <br /> information for the user to recognize the source of the conflict. Ideally, the response entity would include enough information for the user or user agent to fix the problem; however, that might not be possible and is not required.<br /> <br /> Conflicts are most likely to occur in response to a PUT request. For example, if versioning were being used and the entity being PUT included changes to a resource which conflict with those made by an earlier (third-party) request, the server might use the 409 response to indicate that it can&#39;t complete the request. 
In this case, the response entity would likely contain a list of the differences between the two versions in a format defined by the response Content-Type.<br /> <br /> 10.4.11 410 Gone<br /> <br /> The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.<br /> <br /> The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server&#39;s site. It is not necessary to mark all permanently unavailable resources as &quot;gone&quot; or to keep the mark for any length of time -- that is left to the discretion of the server owner.<br /> <br /> 10.4.12 411 Length Required<br /> <br /> The server refuses to accept the request without a defined Content- Length. The client MAY repeat the request if it adds a valid Content-Length header field containing the length of the message-body in the request message.<br /> <br /> 10.4.13 412 Precondition Failed<br /> <br /> The precondition given in one or more of the request-header fields evaluated to false when it was tested on the server. 
This response code allows the client to place preconditions on the current resource metainformation (header field data) and thus prevent the requested method from being applied to a resource other than the one intended.<br /> <br /> 10.4.14 413 Request Entity Too Large<br /> <br /> The server is refusing to process a request because the request entity is larger than the server is willing or able to process. The server MAY close the connection to prevent the client from continuing the request.<br /> <br /> If the condition is temporary, the server SHOULD include a Retry- After header field to indicate that it is temporary and after what time the client MAY try again.<br /> <br /> 10.4.15 414 Request-URI Too Long<br /> <br /> The server is refusing to service the request because the Request-URI is longer than the server is willing to interpret. This rare condition is only likely to occur when a client has improperly converted a POST request to a GET request with long query information, when the client has descended into a URI &quot;black hole&quot; of redirection (e.g., a redirected URI prefix that points to a suffix of itself), or when the server is under attack by a client attempting to exploit security holes present in some servers using fixed-length buffers for reading or manipulating the Request-URI.<br /> <br /> 10.4.16 415 Unsupported Media Type<br /> <br /> The server is refusing to service the request because the entity of the request is in a format not supported by the requested resource for the requested method.<br /> <br /> 10.4.17 416 Requested Range Not Satisfiable<br /> <br /> A server SHOULD return a response with this status code if a request included a Range request-header field (section 14.35), and none of the range-specifier values in this field overlap the current extent of the selected resource, and the request did not include an If-Range request-header field. 
(For byte-ranges, this means that the first- byte-pos of all of the byte-range-spec values were greater than the current length of the selected resource.)<br /> <br /> When this status code is returned for a byte-range request, the response SHOULD include a Content-Range entity-header field specifying the current length of the selected resource (see section 14.16). This response MUST NOT use the multipart/byteranges content- type.<br /> <br /> 10.4.18 417 Expectation Failed<br /> <br /> The expectation given in an Expect request-header field (see section 14.20) could not be met by this server, or, if the server is a proxy, the server has unambiguous evidence that the request could not be met by the next-hop server.<br /> <br /> 10.5 Server Error 5xx<br /> <br /> Response status codes beginning with the digit &quot;5&quot; indicate cases in which the server is aware that it has erred or is incapable of performing the request. Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. User agents SHOULD display any included entity to the user. These response codes are applicable to any request method.<br /> <br /> 10.5.1 500 Internal Server Error<br /> <br /> The server encountered an unexpected condition which prevented it from fulfilling the request.<br /> <br /> 10.5.2 501 Not Implemented<br /> <br /> The server does not support the functionality required to fulfill the request. 
This is the appropriate response when the server does not recognize the request method and is not capable of supporting it for any resource.<br /> <br /> 10.5.3 502 Bad Gateway<br /> <br /> The server, while acting as a gateway or proxy, received an invalid response from the upstream server it accessed in attempting to fulfill the request.<br /> <br /> 10.5.4 503 Service Unavailable<br /> <br /> The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response.<br /> <br /> Note: The existence of the 503 status code does not imply that a<br /> server must use it when becoming overloaded. Some servers may wish<br /> to simply refuse the connection.<br /> 10.5.5 504 Gateway Timeout<br /> <br /> The server, while acting as a gateway or proxy, did not receive a timely response from the upstream server specified by the URI (e.g. HTTP, FTP, LDAP) or some other auxiliary server (e.g. DNS) it needed to access in attempting to complete the request.<br /> <br /> Note: Note to implementors: some deployed proxies are known to<br /> return 400 or 500 when DNS lookups time out.<br /> 10.5.6 505 HTTP Version Not Supported<br /> <br /> The server does not support, or refuses to support, the HTTP protocol version that was used in the request message. The server is indicating that it is unable or unwilling to complete the request using the same major version as the client, as described in section 3.1, other than with this error message. 
The response SHOULD contain an entity describing why that version is not supported and what other protocols are supported by that server.</p> <p>Good luck!</p>Fri, 28 Feb 2014 20:30:58 +01002014-02-28T20:30:58+01:00608http://www.erikojebo.se/Code/Details/608webmaster@erikojebo.seAdding LESS to your ASP.NET Application<p>Most web developers would probably agree that it can be frustrating working with CSS. CSS is anything but <a href='http://en.wikipedia.org/wiki/Don&#39;t_repeat_yourself'>DRY</a> since there are no good ways to reuse styles between classes, etc. This makes CSS somewhat of a maintenance nightmare, but fortunately other tools have appeared in the web development world to mitigate this problem. One such tool is <a href='http://lesscss.org/'>LESS</a>.</p> <p>To make use of LESS in your ASP.NET application the easiest way is to install the <a href='http://www.dotlesscss.org/'>dotLess</a> <a href='http://www.nuget.org/packages/dotless/'>nuget package</a>. This adds references to the needed assembly and adds a bunch of stuff to your web.config to add a handler which compiles your LESS into CSS.</p> <p>The Nuget package does all the heavy lifting, but you might have to tell IIS what to do when someone requests a .less file, by adding a MIME type configuration.</p> <p>In the webserver section of your web.config add the following snippet:</p> <pre class='prettyprint'> &lt;staticContent&gt;<br /> &lt;mimeMap fileExtension=&quot;.less&quot; mimeType=&quot;text/css&quot; /&gt;<br /> &lt;/staticContent&gt;</pre> <p>Another thing to remember is that if you are using the built-in publishing/deployment features of Visual Studio to deploy your application you need to set the Build Action to Content for all your .less files, or they will not be included in the deployment.</p>Sat, 28 Dec 2013 13:39:38 +01002013-12-28T13:39:38+01:00607http://www.erikojebo.se/Code/Details/607webmaster@erikojebo.seDeploying and Securing ELMAH in an ASP.NET MVC 4 Application<p>Every app you ever build 
should have some easy way to access a log of recent exceptions. An easy way to accomplish this is to use the <a href='https://code.google.com/p/elmah/'>ELMAH</a> library. ELMAH stands for Error Logging Modules and Handlers. In its simplest form ELMAH takes care of logging unhandled exceptions as well as giving you an easy way of accessing that log, which tends to be quite a good starting point.</p> <p><strong>Installation</strong><br />To add ELMAH to your MVC application you simply install the <a href='http://www.nuget.org/packages/Elmah.MVC/'>Elmah.Mvc nuget package</a>. This adds references to the needed assemblies and adds a bunch of stuff to your web.config.</p> <p>Just by adding that nuget package you have now added logging of unhandled exceptions to your application. To see the error log, and to check that everything is working, you simply open up your site on localhost and go to the URI /elmah. You should now see a page with a list of the recent log entries (probably empty, since you just added ELMAH).</p> <p>The default configuration is such that the logging is done in-memory, which is good since it minimizes the need for configuration, but bad since you will lose the log when your app pool is recycled. 
It is also set up to only allow requests from localhost to access the log page, which is a good idea from a security point of view, but might be impractical.</p> <p><strong>Enabling Remote Access</strong><br />To enable remote access to the ELMAH page you need to add the following to the ELMAH section of your Web.config:</p> <pre class='prettyprint'> &lt;elmah&gt;<br /> &lt;security allowRemoteAccess=&quot;yes&quot; /&gt;<br /> &lt;/elmah&gt;</pre> <p>By doing that you make the page accessible to anyone, which is NOT a good idea, so you need to do some further web.config tweaking to make it secure.</p> <p><strong>Securing the ELMAH page</strong><br />If you start googling how to make the ELMAH page secure you will most likely find advice on how to set up location entries in your web.config to restrict access to elmah.axd. This does not play well with Elmah.Mvc, which uses a slightly different approach. Elmah.Mvc adds a bunch of app settings to your web.config which let you control the security of the page in an easy manner:</p> <pre class='prettyprint'>&lt;appSettings&gt;<br /> &lt;!-- snip --&gt;<br /> &lt;add key=&quot;elmah.mvc.disableHandler&quot; value=&quot;false&quot; /&gt;<br /> &lt;add key=&quot;elmah.mvc.disableHandleErrorFilter&quot; value=&quot;false&quot; /&gt;<br /> &lt;add key=&quot;elmah.mvc.requiresAuthentication&quot; value=&quot;true&quot; /&gt;<br /> &lt;add key=&quot;elmah.mvc.IgnoreDefaultRoute&quot; value=&quot;false&quot; /&gt;<br /> &lt;add key=&quot;elmah.mvc.allowedRoles&quot; value=&quot;*&quot; /&gt;<br /> &lt;add key=&quot;elmah.mvc.allowedUsers&quot; value=&quot;someadminusername&quot; /&gt;<br /> &lt;add key=&quot;elmah.mvc.route&quot; value=&quot;elmah&quot; /&gt;<br /> &lt;/appSettings&gt;<br /></pre> <p>The first step is to set the elmah.mvc.requiresAuthentication value to true, which ensures that the user is authenticated by Forms Authentication when accessing the ELMAH page. 
But that alone is not enough, so you need to add either allowedUsers or allowedRoles. Those settings tell Elmah.Mvc to check that the user accessing the page has one of the listed usernames or roles, depending on what you have specified.</p> <p><strong>Configuring where ELMAH Saves the Log Entries</strong><br />As I mentioned earlier, ELMAH stores the log in-memory by default, but you can configure it to use XML files or a SQL database instead if you want to avoid losing the log every now and then.</p> <p>To enable XML file logging you need to do some further web.config tweaking by adding a setting to the ELMAH section of the web.config:</p> <pre class='prettyprint'> &lt;elmah&gt;<br /> &lt;security allowRemoteAccess=&quot;yes&quot; /&gt;<br /> &lt;errorLog type=&quot;Elmah.XmlFileErrorLog, Elmah&quot; logPath=&quot;~/App_Data&quot; /&gt;<br /> &lt;/elmah&gt;</pre> <p>If you choose this approach you need to make sure that your application has write permissions on the path you specify as logPath.</p> <p>If you would rather have ELMAH save the log to a database you modify the config to look like this instead:</p> <pre class='prettyprint'> &lt;elmah&gt;<br /> &lt;security allowRemoteAccess=&quot;yes&quot; /&gt;<br /> &lt;errorLog type=&quot;Elmah.SqlErrorLog, Elmah&quot; connectionStringName=&quot;YourConnectionStringName&quot; /&gt;<br /> &lt;/elmah&gt;</pre> <p>Since it writes stuff to the database you also have to make some schema changes in the database to which you pointed ELMAH with the connection string you chose above. To find the latest SQL script you go to the <a href='https://code.google.com/p/elmah/wiki/Downloads'>ELMAH downloads page</a>, navigate to the latest release, and scroll down until you find the links called &quot;DDL Script&quot;. There you choose the appropriate database provider and download that script.</p> <p>The script creates a table and three stored procedures. 
You now have to make sure that the database user you use for the application has access to the table and execute permissions on the stored procedures. Here is how to add the execute permissions:</p> <pre class='prettyprint'>GRANT EXECUTE ON [dbo].[ELMAH_GetErrorXml] TO yourdatabaseusername;<br />GRANT EXECUTE ON [dbo].[ELMAH_GetErrorsXml] TO yourdatabaseusername;<br />GRANT EXECUTE ON [dbo].[ELMAH_LogError] TO yourdatabaseusername;</pre> <p>That should be it! To make sure everything works, force an error in some way (I added an action which simply raises an exception), navigate to /elmah on your site and make sure the error appeared in the log.</p>Sat, 28 Dec 2013 13:20:31 +01002013-12-28T13:20:31+01:00606http://www.erikojebo.se/Code/Details/606webmaster@erikojebo.seChecklist for Solving the Directory Listing Denied Error after Deploying an ASP.NET MVC Application<p>More often than not after deploying a new ASP.NET MVC application for the first time I get an error message from IIS telling me that directory listing is not allowed. This error appears since IIS does not know what to do when you make a request for the root of the application and there is no default document, such as index.htm, so it tries to list the contents of the root directory. This, however, is usually not permitted by the server, and there is your error.</p> <p>The cause of the error might be one of any number of things. 
Here is a quick little checklist of things that I have found to be common causes:</p> <p>* Make sure that the server is running the version of the .NET Framework which your application is targeting<br />* Make sure that the application is running in an application pool using the correct version of the .NET Framework and is running in integrated mode<br />* Try to include the &lt;modules runAllManagedModulesForAllRequests=&quot;true&quot; /&gt; element in the WebServer section of your Web.config</p>Sat, 28 Dec 2013 12:44:10 +01002013-12-28T12:44:10+01:00605http://www.erikojebo.se/Code/Details/605webmaster@erikojebo.seQuick Tip: Finding File Names of all Files in a Directory Containing a Given String<p>Here is a quick example of how to use grep to find the file names of all files in a directory tree which contain a given string:</p> <pre class='prettyprint'>grep -rl --include=*.js &quot;string to search for&quot; /some/path</pre> <p>-r makes the search recursive.<br />-l tells grep to list the file names rather than all occurrences in all the files, which is handy if you want to pipe the output to another command which will do something with those files.<br />--include specifies a pattern used to determine if a given file should be searched. 
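As a concrete illustration of piping the -l output to another command, here is a small self-contained sketch (the /tmp/grep-demo directory and its file contents are made up for the example):

```shell
# Create a couple of throwaway files to search
mkdir -p /tmp/grep-demo
printf 'var foo = "needle";\n' > /tmp/grep-demo/a.js
printf 'var bar = 1;\n' > /tmp/grep-demo/b.js

# -r: recurse, -l: print matching file names only, --include: limit to *.js.
# xargs then runs wc -l on just the files that matched.
grep -rl --include='*.js' 'needle' /tmp/grep-demo | xargs wc -l
```

Only a.js contains the string, so only that file is passed on to wc.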
If you want to specify multiple patterns you can surround them with curly braces, like so:</p> <pre class='prettyprint'>grep -rl --include={*.js,*.html} &quot;string to search for&quot; /some/path</pre>Wed, 25 Dec 2013 12:50:42 +01002013-12-25T12:50:42+01:00592http://www.erikojebo.se/Code/Details/592webmaster@erikojebo.seListing All Constraints in a SQL Server Database using T-SQL<p>Here is a short T-SQL snippet showing how to list all constraints in a database matching a given string:</p> <pre class='prettyprint'>SELECT * FROM sys.objects<br />WHERE type_desc LIKE &#39;%CONSTRAINT&#39; and name like &#39;DF%&#39;</pre>Sun, 01 Dec 2013 20:10:05 +01002013-12-01T20:10:05+01:00591http://www.erikojebo.se/Code/Details/591webmaster@erikojebo.seGetting Started with PowerShell<p>If you are a POSIX person stuck in a Windows environment or just a command line type of person you need to start using PowerShell.</p> <p><strong>Getting Started</strong><br />The first step to be able to do any actual scripting is to allow script execution on your machine.</p> <pre class='prettyprint'>Set-ExecutionPolicy RemoteSigned</pre> <p>This allows any local scripts to run, but requires downloaded scripts to be signed.</p> <p><strong>Hello World!</strong><br />Time to try it out. Write the following to helloworld.ps1:</p> <pre class='prettyprint'>Write-Host &quot;Hello PowerShell&quot;</pre> <p>Navigate to the directory where you put the helloworld.ps1 file and run the following command:</p> <pre class='prettyprint'>.\helloworld</pre> <p>You should now see &quot;Hello PowerShell&quot; written to your console.</p> <p><strong>Console Customization</strong><br />If you are a tinkerer you probably want to do some customization of the console as the next step.</p> <p>PowerShell has the notion of a user profile. 
Your profile is a file where you can set up your environment according to your personal preferences, just like a .bashrc file under Linux.</p> <p>To find out where PowerShell is looking to find your profile, type the following in your console:</p> <pre class='prettyprint'>$profile</pre> <p>If you quickly want to create a profile you can use the $profile variable as the argument to your favourite editor, like so:</p> <pre class='prettyprint'>notepad $profile</pre> <p>Here is an example of a profile which customizes the colors used:</p> <pre class='prettyprint'>$settings = (Get-Host).PrivateData<br /><br />$settings.ErrorBackgroundColor = &quot;Red&quot;<br />$settings.ErrorForegroundColor = &quot;White&quot;<br />$settings.WarningBackgroundColor = &quot;Red&quot;<br />$settings.WarningForegroundColor = &quot;White&quot;<br /><br />$ui = (Get-Host).UI.RawUI<br />$ui.BackgroundColor = &quot;White&quot;<br />$ui.ForegroundColor = &quot;Black&quot;<br /><br /># Make sure to clear the screen so that the entire console is redrawn with<br /># the new background color<br />Clear-Host</pre> <p>If you want to reload your profile without restarting the console you can run the following command:</p> <pre class='prettyprint'>. $profile</pre>Sat, 30 Nov 2013 22:18:02 +01002013-11-30T22:18:02+01:00579http://www.erikojebo.se/Code/Details/579webmaster@erikojebo.seFixing the Font Awesome WOFF 404 Error under ASP.NET MVC<p>Lately I&#39;ve been doing a few ASP.NET MVC apps that make use of the very useful <a href='http://fortawesome.github.io/Font-Awesome/'>Font Awesome</a> font for icons. 
However, each time I&#39;ve gotten an annoying 404 error for the woff font file:</p> <p><img src='/images/upload/code_blog/font-awesome-woff-404.png' /></p> <p>The solution for this is to add the following segment to the WebServer section of your web.config:</p> <pre class='prettyprint'>&lt;staticContent&gt;<br /> &lt;mimeMap fileExtension=&quot;.woff&quot; mimeType=&quot;application/x-font-woff&quot; /&gt;<br />&lt;/staticContent&gt;<br /></pre> <p>This configures IIS to understand that there is a woff mime type that it should care about, which magically makes the 404 go away.</p> <p>If you get a 500 Internal Server Error due to there already being a mime map for .woff you can add a remove tag before adding the new mimeMap, like so:</p> <pre class='prettyprint'>&lt;staticContent&gt;<br /> &lt;remove fileExtension=&quot;.woff&quot;/&gt;<br /> &lt;mimeMap fileExtension=&quot;.woff&quot; mimeType=&quot;application/x-font-woff&quot; /&gt;<br />&lt;/staticContent&gt;<br /></pre>Tue, 22 Oct 2013 20:51:46 +02002013-10-22T20:51:46+02:00522http://www.erikojebo.se/Code/Details/522webmaster@erikojebo.seQuick Tip: IIS Default Log Path<p>The HTTP error log can be a quite useful tool when troubleshooting stuff that does not generate entries in the event log. Here is the default path to the HTTPERR log, where HTTP.sys records requests that were rejected before reaching IIS itself:</p> <pre class='prettyprint'>%systemroot%\system32\logfiles\HTTPERR</pre>Thu, 23 May 2013 14:53:53 +02002013-05-23T14:53:53+02:00483http://www.erikojebo.se/Code/Details/483webmaster@erikojebo.seFiltering Lines in a Text File using Emacs<p>Quite often when working with large text files, such as log files, you want to remove or keep all lines which match a given pattern. If you are working on the command line you would probably use grep to accomplish this. However, if you are editing the file in Emacs you can use the commands <em>flush-lines</em> and <em>keep-lines</em>. 
Both these commands take a parameter which is the regex to use when matching the lines in the file.</p> <p>Let&#39;s say that you have a text file that looks something like this:</p> <pre class='prettyprint'>INSERT INTO [User] ...<br />INSERT INTO [Blog] ...<br />INSERT INTO [Comment] ...<br />INSERT INTO [Comment] ...<br />INSERT INTO [User] ...<br />INSERT INTO [User] ...</pre> <p>If you are only interested in the lines which insert data into the Comment table you can use <em>keep-lines</em>:</p> <pre class='prettyprint'>M-x keep-lines&lt;enter&gt;<br />Comment</pre> <p>The buffer would then contain the following text:</p> <pre class='prettyprint'>INSERT INTO [Comment] ...<br />INSERT INTO [Comment] ...</pre> <p>If you instead wanted to exclude those same lines you could use <em>flush-lines</em>, in just the same way.</p> <p>It&#39;s worth mentioning that these commands accept full-blown regexes as arguments, and not just simple strings.</p>Wed, 03 Apr 2013 13:23:45 +02002013-04-03T13:23:45+02:00482http://www.erikojebo.se/Code/Details/482webmaster@erikojebo.seCreating a New User through T-SQL<p>Logins/users in SQL Server are one of those things I always tend to mess up. It usually starts with me trying to create a user and getting an error message that the login is invalid or that I don&#39;t have permissions. Then I remember that there was something confusing about the whole login/user thing and it goes on from there.</p> <p>So, for the sake of my own mental health, here is a little T-SQL script that creates a login and a user:</p> <pre class='prettyprint'>use SomeDatabase<br /><br />-- Make sure the password meets the password complexity requirements<br />CREATE LOGIN [someuser] WITH PASSWORD=N&#39;foobar.1&#39;<br />CREATE USER [someuser] FOR LOGIN [someuser]</pre> <p>When that is done, make sure that SQL Server authentication is enabled (Management Studio: right-click the server node in the Object Explorer, choose Properties, and select the Security page). 
And finally, assign the proper roles to the new user in the databases it should have access to.</p> <p>To assign roles to the new user you can use the following T-SQL statements:</p> <pre class='prettyprint'><br />EXEC sp_addrolemember N&#39;db_datareader&#39;, N&#39;someuser&#39;<br />EXEC sp_addrolemember N&#39;db_datawriter&#39;, N&#39;someuser&#39;<br /></pre>Sat, 30 Mar 2013 21:45:37 +01002013-03-30T21:45:37+01:00481http://www.erikojebo.se/Code/Details/481webmaster@erikojebo.seRunning Scripts against a SQL Server Compact Edition 4.0 Database<p><a href='http://sqlcetoolbox.codeplex.com/releases'>SQL CE Toolbox</a> is a project that includes a simple standalone application for managing SQL Server Compact Edition databases. It lets you run/generate scripts, which can be quite useful.</p> <p>The GUI leaves a bit to be desired, but it works.</p>Mon, 25 Mar 2013 21:14:52 +01002013-03-25T21:14:52+01:00480http://www.erikojebo.se/Code/Details/480webmaster@erikojebo.seClearing the HTML5 AppCache in Google Chrome<p>When you are developing an HTML5 offline application you can get into some problems if the cache manifest gets cached in the browser cache. If you are using Chrome you can use <a href='chrome://appcache-internals/'>chrome://appcache-internals/</a> to get rid of stuff in the AppCache that should not be there. 
</p>Tue, 19 Mar 2013 16:21:28 +01002013-03-19T16:21:28+01:00478http://www.erikojebo.se/Code/Details/478webmaster@erikojebo.seQuick Tip: Subtracting All Lines in a File from another File<p>If you want to subtract all the lines in one file from another file you can do this using grep.</p> <pre class='prettyprint'>grep -Fxvf some_file_to_subtract some_file</pre> <p>See <a href='http://blog.codevariety.com/2012/03/07/shell-subtract-lines-of-one-file-from-another-file/'>this blog post</a> for further details.</p>Sat, 23 Feb 2013 15:37:08 +01002013-02-23T15:37:08+01:00475http://www.erikojebo.se/Code/Details/475webmaster@erikojebo.seQuick Tip: Querying All Columns in a SQL Server Database<p>Here is a short T-SQL script for finding all columns in a database matching a given search string:</p> <pre class='prettyprint'>SELECT t.name AS table_name,<br />SCHEMA_NAME(schema_id) AS schema_name,<br />c.name AS column_name<br />FROM sys.tables AS t<br />INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID<br />WHERE c.name LIKE &#39;%Supervisor%&#39;<br />ORDER BY schema_name, table_name;</pre>Mon, 11 Feb 2013 20:15:09 +01002013-02-11T20:15:09+01:00474http://www.erikojebo.se/Code/Details/474webmaster@erikojebo.seLinux Kata #2: Ordering Lunch<p>This is a kata with a couple of different difficulty levels.</p> <p>Level 1:</p> <p>Count how many attendees want chicken for lunch and how many want tofu. It&#39;s OK to run two separate commands to get the counts. </p> <p>Here are the contents of the file <em>food.txt</em>. 
It&#39;s worth noting that the file contains a mix of spaces and tabs as column separators.</p> <pre class='prettyprint'>Name Attendance Response Food<br />Allen Key Meeting Organizer None Chicken<br />Bob Sled Required Attendee Accepted Tofu<br />Clay Pigeon Required Attendee Accepted Chicken<br />Cliff Edge Required Attendee Accepted Tofu<br />Guy Ropes Required Attendee Accepted Tofu<br />Jack Hammer Required Attendee Accepted Chicken<br />Jerry Cann Required Attendee Declined<br />Jim Boot Required Attendee Accepted Chicken<br />Jim Equipment Required Attendee Accepted Chicken<br />Jock Strap Required Attendee Accepted Tofu<br />Lou Paper Required Attendee Accepted Chicken<br />Mike Stand Required Attendee Declined<br />Morris Minor Required Attendee Accepted Chicken<br />Phillip Screwdriver Required Attendee Accepted Chicken<br />Ray Gunn Required Attendee Accepted Tofu<br />Roman Bath Required Attendee Accepted Tofu<br />Stanley Knife Required Attendee Accepted Chicken<br />Terry Towelling Required Attendee Accepted Tofu<br />Walter Closet Required Attendee Accepted Chicken<br />Catherine Wheel Required Attendee Declined Chicken<br />Joy Stick Required Attendee Accepted Tofu<br />Kitty Litter Required Attendee Accepted Tofu<br />Pearl Necklace Required Attendee Accepted Tofu<br />Penny Farthing Required Attendee Declined Chicken<br />Jim Nazium Required Attendee Declined</pre> <p>Level 2:</p> <p>Same task as level 1, but the result from your command/script should be a single line with the format &quot;Tofu: 10, Chicken: 12&quot;</p> <p><br />Level 3:</p> <p>Same task as level 2, but the lunch options may consist of multiple words.</p> <p>Here&#39;s the file contents of the file <em>food2.txt</em>. 
There&#39;s a mix of spaces and tabs in this file as well.</p> <pre class='prettyprint'>Name Attendance Response Food<br />Allen Key Meeting Organizer None Chicken<br />Bob Sled Required Attendee Accepted Fried tofu<br />Clay Pigeon Required Attendee Accepted Chicken<br />Cliff Edge Required Attendee Accepted Fried tofu<br />Guy Ropes Required Attendee Accepted Fried tofu<br />Jack Hammer Required Attendee Accepted Chicken<br />Jerry Cann Required Attendee Declined<br />Jim Boot Required Attendee Accepted Chicken<br />Jim Equipment Required Attendee Accepted Chicken<br />Jock Strap Required Attendee Accepted Fried tofu<br />Lou Paper Required Attendee Accepted Chicken<br />Mike Stand Required Attendee Declined<br />Morris Minor Required Attendee Accepted Chicken<br />Phillip Screwdriver Required Attendee Accepted Chicken<br />Ray Gunn Required Attendee Accepted Fried tofu<br />Roman Bath Required Attendee Accepted Fried tofu<br />Stanley Knife Required Attendee Accepted Chicken<br />Terry Towelling Required Attendee Accepted Fried tofu<br />Walter Closet Required Attendee Accepted Chicken<br />Catherine Wheel Required Attendee Declined Chicken<br />Joy Stick Required Attendee Accepted Fried tofu<br />Kitty Litter Required Attendee Accepted Fried tofu<br />Pearl Necklace Required Attendee Accepted Fried tofu<br />Penny Farthing Required Attendee Declined Chicken<br />Jim Nazium Required Attendee Declined</pre> <p>The output should still be &quot;Tofu: 10, Chicken: 12&quot;.</p> <p><br />Suggested solution for level 1:</p> <pre class='prettyprint'>cat food.txt | grep Tofu | wc -l;cat food.txt | grep Chicken | wc -l</pre> <p>These commands basically just list the entire file contents, keep only the lines containing the word Tofu/Chicken, and then count the remaining lines.</p> <p><br />Suggested solution for level 2:</p> <pre class='prettyprint'>awk &#39;{if ($NF == &quot;Tofu&quot;) tofu_count += 1; else if ($NF == &quot;Chicken&quot;) chicken_count += 1 } 
END {print &quot;Tofu: &quot;,tofu_count,&quot;, Chicken: &quot;,chicken_count}&#39; food.txt</pre> <p>This AWK program has no filtering condition, so it matches all lines. For each line which has Tofu/Chicken as the last word it increments the corresponding variable. When all lines have been processed it outputs the values in the variables and some additional text. NF is a magical variable containing the number of fields in a record. $ is used to extract a field with a given 1-based index. So $NF evaluates to the last word on a given line.</p> <p><br />Suggested solution for level 3:</p> <pre class='prettyprint'>awk &#39;BEGIN {FS = &quot; {2,}|\t+&quot;}{if ($NF == &quot;Fried tofu&quot;) tofu_count += 1; else if ($NF == &quot;Chicken&quot;) chicken_count += 1} END {print &quot;Tofu: &quot;,tofu_count,&quot;, Chicken: &quot;,chicken_count}&#39; food2.txt<br />Tofu: 10 , Chicken: 12</pre> <p>Most of the script is the same as in the solution for the level 2 problem. However, in this script the field separator has been customized so that words separated by a single space are considered to be part of the same field. 
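To see the custom separator in action on a single made-up line (note that the {2,} interval syntax is not supported by every awk implementation; the equivalent &quot;  +|\t+&quot; is used here for portability):

```shell
# Fields are split on runs of 2+ spaces or 1+ tabs, so the single space
# in "Fried tofu" does not start a new field. $NF is the last field.
printf 'Bob Sled  Required Attendee\tAccepted  Fried tofu\n' |
  awk 'BEGIN {FS = "  +|\t+"} {print $NF}'
# → Fried tofu
```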
The FS variable contains the field separator expression, which in this case is a regular expression which matches 2 or more spaces or 1 or more tabs.</p>Mon, 17 Dec 2012 21:06:30 +01002012-12-17T21:06:30+01:00473http://www.erikojebo.se/Code/Details/473webmaster@erikojebo.seQuick Tip: Opening Multiple Files with a Single Command in Emacs<p>If you quickly want to open multiple files from a single directory you can use wildcards in the find file command, for example:</p> <pre class='prettyprint'>C-x C-f</pre> <p><br /><pre class='prettyprint'>Find file: ~/path/to/directory/foo.*</pre></p>Mon, 03 Dec 2012 13:20:38 +01002012-12-03T13:20:38+01:00471http://www.erikojebo.se/Code/Details/471webmaster@erikojebo.seEmacs Kata #2: Making Sense of an Outlook Meeting Attendees List<p>One of my pet peeves with Microsoft Outlook is that there is no decent way to sort/filter/print/output/whatever for the attendee list of a meeting. Because of this I always end up doing those things in emacs instead.</p> <p>Today&#39;s task is to take this attendee list:</p> <pre class='prettyprint'><br />Name Attendance Response<br />Allen Key Meeting Organizer None<br />Bob Sled Required Attendee Accepted<br />Clay Pigeon Required Attendee Accepted<br />Cliff Edge Required Attendee Accepted<br />Guy Ropes Required Attendee Accepted<br />Jack Hammer Required Attendee Accepted<br />Jerry Cann Required Attendee Declined<br />Jim Boot Required Attendee Accepted<br />Jim Equipment Required Attendee Accepted<br />Jock Strap Required Attendee Accepted<br />Lou Paper Required Attendee Accepted<br />Mike Stand Required Attendee Declined<br />Morris Minor Required Attendee Accepted<br />Phillip Screwdriver Required Attendee Accepted<br />Ray Gunn Required Attendee Accepted<br />Roman Bath Required Attendee Accepted<br />Stanley Knife Required Attendee Accepted<br />Terry Towelling Required Attendee Accepted<br />Walter Closet Required Attendee Accepted<br />Catherine Wheel Required Attendee Declined<br 
/>Joy Stick Required Attendee Accepted<br />Kitty Litter Required Attendee Accepted<br />Pearl Necklace Required Attendee Accepted<br />Penny Farthing Required Attendee Declined<br />Jim Nazium Required Attendee Declined<br />Jack Pott Required Attendee Accepted<br />Noah Zark Required Attendee Declined<br />Cain Basket Required Attendee Accepted<br />Barb Dwyer Required Attendee Accepted<br />Helmut Strap Required Attendee Accepted<br />Jim Shorts Required Attendee None<br />Peg Basket Required Attendee Accepted<br />Col Pitt Required Attendee Accepted<br />Cec Pitt Required Attendee None<br />Mary Goround Required Attendee Accepted<br />Annette Curtain Required Attendee None<br />Brandon Iron Required Attendee None<br />Mike Rowe-chip Required Attendee Accepted<br />Lucy Lastic Required Attendee Accepted<br /></pre> <p>and transform it into this:</p> <pre class='prettyprint'><br />Name Response<br />Allen Key None<br />Bob Sled Accepted<br />Clay Pigeon Accepted<br />Cliff Edge Accepted<br />Guy Ropes Accepted<br />Jack Hammer Accepted<br />Jim Boot Accepted<br />Jim Equipment Accepted<br />Jock Strap Accepted<br />Lou Paper Accepted<br />Morris Minor Accepted<br />Phillip Screwdriver Accepted<br />Ray Gunn Accepted<br />Roman Bath Accepted<br />Stanley Knife Accepted<br />Terry Towelling Accepted<br />Walter Closet Accepted<br />Joy Stick Accepted<br />Kitty Litter Accepted<br />Pearl Necklace Accepted<br />Jack Pott Accepted<br />Cain Basket Accepted<br />Barb Dwyer Accepted<br />Helmut Strap Accepted<br />Peg Basket Accepted<br />Col Pitt Accepted<br />Mary Goround Accepted<br />Mike Rowe-chip Accepted<br />Lucy Lastic Accepted<br /> <br />Jim Shorts None<br />Cec Pitt None<br />Annette Curtain None<br />Brandon Iron None<br /></pre> <p>So, in other words, the task is to remove all the attendees that declined, remove the Attendance column, and group the attendees with &quot;None&quot; as response (except for the meeting organizer) at the bottom.</p> 
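As a point of comparison before the editor-based solution: the same clean-up can also be scripted with AWK (this is my own sketch, not from the original post; it assumes the pasted list is plain whitespace-separated text where the attendance column is always exactly two words and the response is the single last word):

```shell
# attendees.txt holds the raw list pasted from Outlook.
# Drop the declined attendees, keep "Name Response", and move the "None"
# responses (except the organizer on the first data line) to the bottom.
awk 'NR == 1 { print $1, $3; next }      # header: keep Name and Response
     $NF == "Declined" { next }          # drop declined attendees
     {
       name = $1                         # the name is every field before the
       for (i = 2; i <= NF - 3; i++)     # two attendance words and the response
         name = name " " $i
       line = name " " $NF
       if ($NF == "None" && NR > 2) none = none line "\n"
       else print line
     }
     END { if (none != "") printf "\n%s", none }' attendees.txt
```

The original ordering is kept within each group, which matches the desired output, and the blank line before the None group is emitted by the END block.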
<p><strong>My solution:</strong><br />- Align everything into proper columns with the <a href='http://emacswiki.org/emacs/AlignCommands#toc7'>align-repeat</a> command using tab as separator (use &quot;C-q C-i&quot; to enter a tab in the regex)<br />- Delete all lines with Declined as response using the flush lines command specifying the regex &quot;.*Declined$&quot;<br />- Sort the attendees with Accepted as response, so that the None responses end up at the bottom. This is done using the sort-fields command with -1 as argument (C-u -1 followed by M-x sort-fields). -1 tells emacs to sort using the rightmost column.<br />- Delete the attendance column using rectangle selection, C-x r d to delete the selected rectangle<br />- Add a blank line before the group of attendees with None as response</p>Sun, 02 Dec 2012 14:06:14 +01002012-12-02T14:06:14+01:00452http://www.erikojebo.se/Code/Details/452webmaster@erikojebo.seLinux Kata #1: Welcome Messages<p>Time for another kata! The task of the day:</p> <p>Given the following file (names.txt):</p> <p><br /><pre class='prettyprint'><br />Bernie<br />Steve<br />Lisa<br />Jamie<br />Adam<br />Little Johnny</pre></p> <p>Create the following file structure:</p> <pre class='prettyprint'><br />.<br />|-- names.txt<br />|-- Adam<br />| `-- Adam.txt<br />|-- Steve<br />| `-- Steve.txt<br />|-- Little Johnny<br />| `-- Little Johnny.txt<br />|-- Lisa<br />| `-- Lisa.txt<br />|-- Jamie<br />| `-- Jamie.txt<br />`-- Bernie<br /> `-- Bernie.txt<br /></pre> <p>where each of the txt files contains a welcome message in the following format:</p> <pre class='prettyprint'>Welcome &lt;NAME&gt;, this is your directory! <p>#i.e.<br />Welcome Adam, this is your directory!</pre></p> <p><strong>Tips:</strong><br />If you don&#39;t get it right on the first try, it might come in handy to quickly be able to delete all subdirectories and their content. Here is one way to do that:</p> <pre class='prettyprint'>find . 
-mindepth 1 -type d -print0 | xargs -0 rm -rf</pre> <p>This command uses find to list all directories (-type d) one level down from the current directory. This excludes the &quot;.&quot; directory. &quot;-print0&quot; is used to handle directory names containing whitespace. This needs to be coupled with the -0 option for xargs. xargs is used to execute rm for the output from find. -rf executes a recursive, forced delete of the directories listed by find.</p> <p>Another useful thing might be to use the -t option for xargs (if your solution uses xargs) to make it output commands before executing them.</p> <p><strong>Example solution:</strong><br />Here is one way to solve the problem above:</p> <p><br /><pre class='prettyprint'>cat names.txt | xargs -I {} sh -c &#39;mkdir -p &quot;$1&quot; &amp;&amp; echo &quot;Welcome $1, this is your directory!&quot; &gt; &quot;./$1/$1.txt&quot;&#39; -- {}</pre></p> <p>This command pipes all names from names.txt into xargs, which executes sh for each name in the file. The -I {} option specifies that a given command should be executed for each line from the file, using {} as a placeholder for that line in the command text.</p> <p>sh -c executes a command string. The -- fills sh&#39;s $0 slot, so the argument that follows it is passed as $1 when the command string is executed. In other words, $1 in the command string is replaced with the value of {}, which is actually the line from the file.</p> <p>The actual command text executed by sh is nothing fancy: it creates a directory with the same name as the name in the file (mkdir -p), and then echoes the welcome message into a file with the same name inside the newly created directory.</p> <p>The reason sh is used in this command is that you can&#39;t really execute an xargs command that includes the &gt; operator. 
If you try, the output of the whole command (find + xargs etc.) is simply sent to the file instead.</p>Tue, 27 Nov 2012 20:45:58 +01002012-11-27T20:45:58+01:00451http://www.erikojebo.se/Code/Details/451webmaster@erikojebo.seQuick Tip: Listing the First/Last N Lines of a File under Linux<p>As for everything else, there are of course commands for displaying the first/last N lines of a text file in Linux. These commands are called <em>head</em> and <em>tail</em>.</p> <p>Tail is super useful when you are dealing with long log files, and you&#39;re just interested in what&#39;s happened recently.</p> <p>To get the first/last 25 lines of a given file you can execute the following commands:</p> <p><br /><pre class='prettyprint'><br />head -25 my_file.txt<br />tail -25 my_file.txt<br /></pre></p> <p>or you can pipe stuff to them:</p> <p><br /><pre class='prettyprint'><br />cat my_file.txt | head -25<br />cat my_file.txt | tail -25<br /></pre></p>Sat, 24 Nov 2012 22:53:07 +01002012-11-24T22:53:07+01:00450http://www.erikojebo.se/Code/Details/450webmaster@erikojebo.seQuick Tip: Listing Installed Packages in Ubuntu<p>Here is how to show a list of all installed packages which contain the word &#39;postgres&#39;:</p> <pre class='prettyprint'>dpkg --get-selections | grep postgres</pre>Fri, 16 Nov 2012 08:29:48 +01002012-11-16T08:29:48+01:00449http://www.erikojebo.se/Code/Details/449webmaster@erikojebo.seGetting Started with Postgres on Ubuntu<p>I just tried to get postgres working under Ubuntu 12.10 and had to google a few things, so here is a quick note of how to get it running:</p> <p>Install postgres:</p> <p><br /><pre class='prettyprint'>sudo apt-get install postgresql-9.1</pre></p> <p>Change password for the postgres user:</p> <p><br /><pre class='prettyprint'>sudo passwd postgres</pre></p> <p>Open the file /etc/postgresql/9.1/main/pg_hba.conf and change the line:</p> <p><br /><pre class='prettyprint'> 
local all all md5<br /></pre></p> <p>Create a database user (-P for password prompt):</p> <p><br /><pre class='prettyprint'>sudo -u postgres createuser -P someusername</pre></p> <p>Create a database (-O specifies the owner):</p> <p><br /><pre class='prettyprint'>sudo -u postgres createdb -O someusername somedatabasename</pre></p> <p>Restart postgres:</p> <p><br /><pre class='prettyprint'>sudo service postgresql restart</pre></p> <p>Start setting up the new database with the new user:</p> <p><br /><pre class='prettyprint'>psql -U someusername -d somedatabasename</pre></p> <p>When in the psql terminal, use \? to list all available commands and \q to exit.</p>Fri, 16 Nov 2012 08:28:03 +01002012-11-16T08:28:03+01:00448http://www.erikojebo.se/Code/Details/448webmaster@erikojebo.seEmacs Exercises Revisited: Bulleted List to Numbered List<p>In a <a href='http://erikojebo.se/Code/Details/434'>previous post</a> I presented the following problem and solved it using a regex with an embedded elisp expression.</p> <pre class='prettyprint'>* Foo<br />* Foo<br />* Foo<br />* Foo<br />* Foo<br />* Foo<br />* Foo<br />* Foo</pre> <p>That list should be transformed into a numbered (one based) list, like so:</p> <pre class='prettyprint'>1. Foo<br />2. Foo<br />3. Foo<br />4. Foo<br />5. Foo<br />6. Foo<br />7. Foo<br />8. Foo</pre> <p>It&#39;s time to revisit that problem, and present an alternative solution.</p> <p>For this simple problem a regex is a bit of overkill. This could easily be solved with a keyboard macro.</p> <p><strong>Solution using a keyboard macro:</strong></p> <p>- Set the macro counter to 1 (C-x C-k C-c 1)<br />- Start recording a macro (F3)<br />- Insert the macro counter followed by &quot;. 
&quot; at the beginning of the first line (F3)<br />- Move to the next line and finish the macro (F4)<br />- Run the macro to the end of the file (C-u 0 C-x e)</p>Tue, 23 Oct 2012 14:47:20 +02002012-10-23T14:47:20+02:00447http://www.erikojebo.se/Code/Details/447webmaster@erikojebo.seQuick Tip: Grep Equivalent for Windows<p>To find matching text in a file, when you are working in a Windows command prompt, you can use the <em>findstr</em> command:</p> <pre class='prettyprint'>findstr &quot;search string&quot; c:\some\path\foo.txt</pre> <p>If you want to send the output to a file you can use the &#39;&gt;&#39; operator.</p> <pre class='prettyprint'>findstr &quot;search string&quot; c:\some\path\foo.txt &gt; output.txt</pre>Tue, 23 Oct 2012 10:28:57 +02002012-10-23T10:28:57+02:00446http://www.erikojebo.se/Code/Details/446webmaster@erikojebo.seEmacs Kata #1: Name List<p>Working efficiently with a good text editor is something that takes deliberate practice. My editor of choice is Emacs, and to improve my emacs skills I found myself wanting a few simple katas which let you practice a bunch of everyday text editing scenarios.</p> <p>So, here is the first in a series of text editor katas. The challenge is simply to transform the first block of text into the second block.</p> <pre class='prettyprint'><br />Allen Key Bob Sled Clay Pigeon Cliff Edge<br />Guy Ropes Jack Hammer Jerry Cann Jim Boot<br />Jim Equipment Jock Strap Lou Paper Mike Stand<br />Morris Minor Phillip Screwdriver Ray Gunn Roman Bath<br />Stanley Knife Terry Towelling Walter Closet Catherine Wheel<br />Joy Stick Kitty Litter Pearl Necklace Penny Farthing<br />Jim Nazium Jack Pott Noah Zark Cain Basket<br />Barb Dwyer Helmut Strap Jim Shorts Peg Basket<br />Col Pitt Cec Pitt Mary Goround Annette Curtain<br />Brandon Iron Mike Rowe-chip Lucy Lastic <br /></pre> <pre class='prettyprint'>1. Basket, Cain<br />2. Basket, Peg<br />3. Bath, Roman<br />4. Boot, Jim<br />5. Cann, Jerry<br />6. 
Closet, Walter<br />7. Curtain, Annette<br />8. Dwyer, Barb<br />9. Edge, Cliff<br />10. Equipment, Jim<br />11. Farthing, Penny<br />12. Goround, Mary<br />13. Gunn, Ray<br />14. Hammer, Jack<br />15. Iron, Brandon<br />16. Key, Allen<br />17. Knife, Stanley<br />18. Lastic, Lucy<br />19. Litter, Kitty<br />20. Minor, Morris<br />21. Nazium, Jim<br />22. Necklace, Pearl<br />23. Paper, Lou<br />24. Pigeon, Clay<br />25. Pitt, Cec<br />26. Pitt, Col<br />27. Pott, Jack<br />28. Ropes, Guy<br />29. Rowe-chip, Mike<br />30. Screwdriver, Phillip<br />31. Shorts, Jim<br />32. Sled, Bob<br />33. Stand, Mike<br />34. Stick, Joy<br />35. Strap, Helmut<br />36. Strap, Jock<br />37. Towelling, Terry<br />38. Wheel, Catherine<br />39. Zark, Noah<br /></pre> <p>My personal solution to this kata is something along these lines:<br />- Replace tab with newline (using C-q C-i and C-q C-j to input tab and newline in emacs)<br />- Define a keyboard macro that cuts the last name, puts it first and inserts the comma and space.<br />- Repeat the macro until the end of the file using C-u 0 F4<br />- Sort the lines using M-x sort-lines<br />- Define a keyboard macro that inserts the row number using F3 to insert a macro counter when defining the macro. Since the counter is zero based and the list is one based, I start the macro at the row above the first name.<br />- Run the macro until the end of the file<br />- Align the names using M-x align-repeat (which can be found <a href='http://emacswiki.org/emacs/AlignCommands#toc7'>here</a>)<br /></p>Fri, 19 Oct 2012 20:28:56 +02002012-10-19T20:28:56+02:00445http://www.erikojebo.se/Code/Details/445webmaster@erikojebo.seQuick Tip: Symbolic Links in Linux and Windows<p>Symbolic links are extremely useful for making long and annoying paths more accessible. Simply put, you can make the target directory available through an alternative path.</p> <p>On a Windows machine mklink is used to create symbolic links. 
Mklink is called as follows:</p> <p><br /><pre class='prettyprint'>mklink new_path existing_path</pre></p> <p>and if you want to create a link to a directory you can add the /D flag. For example:</p> <pre class='prettyprint'>mklink /D c:\code c:\cygwin\home\erik\code</pre> <p>To remove the link you can delete the linked directory or file. DO NOT remove all the files in the linked folder, since that will actually remove them in the target folder as well.</p> <p>If you are on a Linux machine you use the ln and unlink commands instead:</p> <pre class='prettyprint'>ln -s /some/complicated/path/ /home/erik/stuff</pre> <p>To remove the link you use unlink:</p> <pre class='prettyprint'>unlink /some/linked/path</pre>Fri, 19 Oct 2012 20:00:08 +02002012-10-19T20:00:08+02:00444http://www.erikojebo.se/Code/Details/444webmaster@erikojebo.seLinux Tip: Copy File Found by Find<p>Here is one way to find a file and copy it in one command:</p> <pre class='prettyprint'>cp `find /directory/to/search/ -name SomeFilename.txt` /destination/path</pre>Tue, 16 Oct 2012 09:27:11 +02002012-10-16T09:27:11+02:00437http://www.erikojebo.se/Code/Details/437webmaster@erikojebo.seOpening Report Data Window in the RDLC Editor<p>I really wish that I did not need to know this, but to avoid future pain here is a quick reminder to self:</p> <p>If you for some reason close the Report Data window when editing an RDLC report you can get it back by using the shortcut Ctrl+Alt+d.</p>Thu, 04 Oct 2012 12:54:45 +02002012-10-04T12:54:45+02:00436http://www.erikojebo.se/Code/Details/436webmaster@erikojebo.seEmacs Regex Writing Made Simpler<p>The big problem with regexes is debugging them, in the rare case when you don&#39;t nail them on the first go.</p> <p>Fortunately Emacs includes a little tool that helps with this: regexbuilder. 
You can run regexbuilder with the following command:</p> <pre class='prettyprint'>M-x re-builder</pre> <p>This handy little tool highlights all matches of the regex you are currently writing in the other visible buffers. Extremely useful!</p>Mon, 24 Sep 2012 22:14:48 +02002012-09-24T22:14:48+02:00435http://www.erikojebo.se/Code/Details/435webmaster@erikojebo.seEmacs resources<p>Here is a little collection of useful links to emacs resources. I&#39;ll continue to update this list as I find more links to remember.</p> <p><strong>Videos</strong><br />* <a href='http://emacsrocks.com/'>Emacs rocks!</a><br />* <a href='http://www.youtube.com/user/rpdillon'>Hack Emacs</a><br />* <a href='http://vimeo.com/timvisher/videos/page:1/sort:newest'>VimGolf in Emacs</a></p> <p><strong>Blog posts</strong></p> <p><strong>Manual entries</strong><br />* <a href='http://www.gnu.org/software/emacs/manual/html_node/emacs/Query-Replace.html'>Query replace</a><br />* <a href='http://www.gnu.org/software/emacs/manual/html_node/emacs/Regexps.html'>Regexps</a></p> <p><strong>Fun stuff</strong><br />* <a href='http://vimgolf.com'>VimGolf.com</a></p> <p><strong>Emacs golf exercises</strong><br />* <a href='http://xahlee.blogspot.se/2011/10/emacs-golf-align-and-sort.html'>Align and sort</a></p>Thu, 20 Sep 2012 21:06:33 +02002012-09-20T21:06:33+02:00434http://www.erikojebo.se/Code/Details/434webmaster@erikojebo.seEmacs replace-regexp fu #2: Bulleted list to numbered list<p>Time for some emacs regex love again! This time the scenario is as follows:</p> <p>You have a simple flat list of stuff in a typical markdown bullet list style.</p> <pre class='prettyprint'>* Foo<br />* Foo<br />* Foo<br />* Foo<br />* Foo<br />* Foo<br />* Foo<br />* Foo</pre> <p>That list should be transformed into a numbered (one based) list, like so:</p> <pre class='prettyprint'>1. Foo<br />2. Foo<br />3. Foo<br />4. Foo<br />5. Foo<br />6. Foo<br />7. Foo<br />8. 
Foo</pre> <p>As you can probably guess, it&#39;s that happy regex time again! Since emacs has the handy \# thingy that gives you the match index, this should go smoothly. So let&#39;s give it a go:</p> <pre class='prettyprint'>M-x replace-regexp</pre> <p>and then we use the following patterns: </p> <p>Match: <em>^\* \(.+\)$</em><br />Replace: <em>\#. \1</em></p> <p>This gives us the following result:</p> <pre class='prettyprint'>0. Foo<br />1. Foo<br />2. Foo<br />3. Foo<br />4. Foo<br />5. Foo<br />6. Foo<br />7. Foo<br /></pre> <p>Almost there, but the match index is zero based and we wanted a one based list, so there is still some work to do. So, now we need to do some math in our regex, and hence it is lisp time!</p> <p>Let&#39;s undo the replace and give it another go:</p> <p>Match: <em>^\* \(.+\)$</em><br />Replace: \,(+ 1 \#). \1</p> <p>This yields the desired result:</p> <pre class='prettyprint'>1. Foo<br />2. Foo<br />3. Foo<br />4. Foo<br />5. Foo<br />6. Foo<br />7. Foo<br />8. Foo</pre> <p>Now we can take a moment to bask in the glory of the lisp regex.<br /></p>Thu, 20 Sep 2012 20:35:59 +02002012-09-20T20:35:59+02:00430http://www.erikojebo.se/Code/Details/430webmaster@erikojebo.seTFS - Undoing Unchanged Files<p>If you find yourself with a pending changes window containing a bunch of files which have not actually been modified, <a href='http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f'>TFS Power Tools</a> can help you.</p> <p>If you install the power tools you get access to a bunch of useful stuff. 
To undo all unchanged files, run the following command in the root of the directory tree that should be affected by the command:</p> <pre class='prettyprint'>C:\Path\To\Source&gt; tfpt uu /r *</pre>Wed, 29 Aug 2012 13:11:31 +02002012-08-29T13:11:31+02:00429http://www.erikojebo.se/Code/Details/429webmaster@erikojebo.seUnblocking Multiple Assemblies at once in Windows 7<p>Windows 7 has the annoying habit of blocking all the .NET assemblies you download from the internet. An assembly can be unblocked by right-clicking it and clicking the Unblock button, but that quickly gets tedious if you have a bunch of assemblies.</p> <p>Enter SysInternals. To fix this quickly, download <a href='http://technet.microsoft.com/en-us/sysinternals/bb897440.aspx'>streams</a> and run it with the -d flag, like this:</p> <pre class='prettyprint'>streams -d *.dll</pre>Wed, 22 Aug 2012 20:51:44 +02002012-08-22T20:51:44+02:00355http://www.erikojebo.se/Code/Details/355webmaster@erikojebo.seNested Git Repositories<p>To add the files of a git repository within another git repository the trick is to add the subdirectory with a trailing slash:</p> <p><br /><pre class='prettyprint'>git add my-subdir/</pre></p> <p>This makes sure the actual files in the directory are added. Otherwise the repo is added as a submodule.</p>Thu, 16 Feb 2012 21:02:09 +01002012-02-16T21:02:09+01:00354http://www.erikojebo.se/Code/Details/354webmaster@erikojebo.seQuick Tip: Installing Conkeror on Ubuntu 11.10<p>Conkeror is a web browser which is extremely keyboard friendly. 
The default keyboard shortcuts are created to be familiar to Emacs users.</p> <p>To install Conkeror on Ubuntu 11.10 you first need to clone the repo:</p> <pre class='prettyprint'>git clone git://repo.or.cz/conkeror.git</pre> <p><br />To launch the application you then need to run the command: </p> <pre class='prettyprint'>firefox -app /path/to/conkeror/application.ini</pre> <p><br />The application.ini file lives in the root directory of your repository.</p> <p>Since the application is not installed in the usual way there is no Unity launcher entry for it. For info on how to create one read <a href='http://erikojebo.se/Post.aspx?id=353'>this post</a>.</p>Thu, 16 Feb 2012 10:46:15 +01002012-02-16T10:46:15+01:00353http://www.erikojebo.se/Code/Details/353webmaster@erikojebo.seQuick Tip: Adding an Application to the Unity Launcher in Ubuntu 11.10<p>If you have a custom application which does not have an entry in the Unity launcher you cannot launch the application from the launcher and because of that you cannot launch it from Gnome Do either.</p> <p>To add the application to the launcher you can create an applicationname.desktop file in ~/.local/share/applications containing the following text:</p> <pre class='prettyprint'>[Desktop Entry]<br />Name=Application name goes here<br />Comment=Comment goes here<br />Exec=/full/path/to/executable<br />Terminal=false<br />Type=Application<br />Icon=</pre> <p>You need to use the full path to the executable file. A path relative to the home directory does not work. I.e. ~/bin/application should be /home/username/bin/application instead.</p> <p>You then need to make the file executable by running </p> <pre class='prettyprint'>chmod +x pathtofile</pre> <p>You can now launch this file from the Unity launcher. 
You have to restart Gnome Do for it to pick it up correctly.</p>Thu, 16 Feb 2012 10:38:44 +01002012-02-16T10:38:44+01:00346http://www.erikojebo.se/Code/Details/346webmaster@erikojebo.seQuick Tip: Copying Files that do not Exist in the Target Folder with XCOPY<p>Here is how you can copy only the files that do not already exist in the target directory from a given source directory. It will also copy all files that are newer in the source directory than in the target directory.</p> <pre class='prettyprint'>C:\&gt;xcopy c:\temp\testcopy\source c:\temp\testcopy\target /e /d</pre> <p><br /></p>Wed, 25 Jan 2012 13:22:46 +01002012-01-25T13:22:46+01:00345http://www.erikojebo.se/Code/Details/345webmaster@erikojebo.seLazy Regex in Visual Studio<p>For some reason everyone who implements a regex engine feels the need to tweak the syntax so that nobody feels at home. Worst of all regex implementations is the Visual Studio one.</p> <p>Here is a little reminder of the VS lazy regex syntax.</p> <p><br /><pre class='prettyprint'>/*</p> <p>| Posix | Visual studio |<br />|-------+---------------|<br />| .*? | .@ |<br />| .+? | .# |</p> <p>*/<br /></pre></p>Thu, 19 Jan 2012 16:37:22 +01002012-01-19T16:37:22+01:00344http://www.erikojebo.se/Code/Details/344webmaster@erikojebo.seInheritance is not an "is a" Relationship<p>One of the popular ways to explain inheritance in object oriented programming is that it represents an &quot;is a&quot; relationship. This can be true, but often it isn&#39;t.</p> <p>An example that Robert C. Martin likes to use is that in the real world a square is a rectangle, but it makes no sense to model that relationship with inheritance in an object oriented system.</p> <p>I came across another example the other day when working on one of my hobby projects. The project is an ORM and I was adding basic convention support. You can specify your own conventions by implementing an IConvention interface. However, often you might not want to redefine all conventions. 
Instead you want to override only the ones you need to.</p> <p>To enable this there is a DefaultConvention class. You can derive from this class and override the conventions you need to modify. So, inheritance is used as a means to solve a problem, but if you think of that inheritance relationship as an &quot;is a&quot; then it makes no sense.</p> <p>An overriding convention is NOT a default convention. It is actually the opposite of that.</p> <p>So, be careful when thinking about inheritance and think of it as a programming-specific mechanism for coupling classes rather than a model of an &quot;is a&quot; relationship from the real world.</p>Wed, 28 Dec 2011 23:43:06 +01002011-12-28T23:43:06+01:00343http://www.erikojebo.se/Code/Details/343webmaster@erikojebo.seGetting a ListBoxItem to stretch horizontally in Silverlight<p>When you specify an ItemTemplate for a ListBox in Silverlight, you can&#39;t get it to stretch horizontally in an easy way. Setting the HorizontalAlignment of the container in the template to stretch has no effect. 
What you have to do is to specify an ItemContainerStyle for the ListBox and set the HorizontalContentAlignment to Stretch:</p> <pre class='prettyprint'> <p>&lt;ListBox ItemsSource=&quot;{Binding Users}&quot; SelectedItem=&quot;{Binding SelectedUser, Mode=TwoWay}&quot;&gt;<br /> &lt;ListBox.ItemContainerStyle&gt;<br /> &lt;Style TargetType=&quot;ListBoxItem&quot;&gt;<br /> &lt;Setter Property=&quot;HorizontalContentAlignment&quot; Value=&quot;Stretch&quot; /&gt;<br /> &lt;/Style&gt;<br /> &lt;/ListBox.ItemContainerStyle&gt;<br /> &lt;ListBox.ItemTemplate&gt;<br /> &lt;DataTemplate&gt;<br /> &lt;Border BorderThickness=&quot;1&quot; BorderBrush=&quot;#CECECE&quot; Padding=&quot;5&quot; CornerRadius=&quot;3&quot; <br /> Background=&quot;{Binding Background}&quot;&gt;<br /> &lt;!-- Your stuff here --&gt;<br /> &lt;/Border&gt;<br /> &lt;/DataTemplate&gt;<br /> &lt;/ListBox.ItemTemplate&gt;<br />&lt;/ListBox&gt;</pre></p>Tue, 29 Nov 2011 08:05:53 +01002011-11-29T08:05:53+01:00342http://www.erikojebo.se/Code/Details/342webmaster@erikojebo.seRemoving the Suck from XML with Gosu.Commons: DynamicXmlParser<p>Ever been bored by writing yet another XML parser? Been annoyed by all the string conversions? 
Let&#39;s take a look at the DynamicXmlParser in <a href='https://github.com/erikojebo/gosu.commons/wiki'>Gosu.Commons</a>.</p> <p>So, let&#39;s say we have an XML document containing a book catalog:</p> <pre class='prettyprint'><br />&lt;?xml version=&#39;1.0&#39;?&gt;<br /> &lt;Catalog&gt;<br /> &lt;Book Id=&#39;123&#39;&gt;<br /> &lt;Title&gt;XML Developer&#39;s Guide&lt;/Title&gt;<br /> &lt;Author FirstName=&#39;Matthew&#39; LastName=&#39;Gambardella&#39; /&gt;<br /> &lt;Price&gt;44.95&lt;/Price&gt;<br /> &lt;PublishDate&gt;2000-10-01&lt;/PublishDate&gt;<br /> &lt;IsBetaRelease&gt;false&lt;/IsBetaRelease&gt;<br /> &lt;BookType&gt;Ebook&lt;/BookType&gt;<br /> &lt;/Book&gt;<br /> &lt;Book Id=&#39;456&#39;&gt;<br /> &lt;Title&gt;Build Awesome Command-Line Applications in Ruby&lt;/Title&gt;<br /> &lt;Author FirstName=&#39;David&#39; LastName=&#39;Copeland&#39; /&gt;<br /> &lt;Price&gt;20.00&lt;/Price&gt;<br /> &lt;PublishDate&gt;2012-03-01&lt;/PublishDate&gt;<br /> &lt;IsBetaRelease&gt;true&lt;/IsBetaRelease&gt;<br /> &lt;BookType&gt;Hardcover&lt;/BookType&gt;<br /> &lt;/Book&gt;<br /> &lt;/Catalog&gt;<br /></pre> <p>We want to cram that XML document into our domain objects:</p> <pre class='prettyprint'><br />public class Author<br />{<br /> public string FirstName { get; set; }<br /> public string LastName { get; set; } <p> public override string ToString()<br /> {<br /> return FirstName + &quot; &quot; + LastName;<br /> }<br />}</p> <p>public class Book<br />{<br /> public string Id { get; set; }<br /> public Author Author { get; set; }<br /> public decimal Price { get; set; }<br /> public bool IsBetaRelease { get; set; }<br /> public BookType BookType { get; set; }<br />}</p> <p>public enum BookType<br />{<br /> Ebook, Paperback, Hardcover<br />}<br /></pre></p> <p>So, usually we start hacking away with an XmlDocument or XDocument, try to dig our way down into the document models and then convert the strings into the correct datatype to be able to store them in 
our objects.</p> <p>That code is kind of boring. Instead, let&#39;s take advantage of the dynamic features of C# 4 to do away with that stuff. Gosu.Commons has an XML parser that does just that: DynamicXmlParser.</p> <p>Here is what the code looks like when using the DynamicXmlParser:</p> <pre class='prettyprint'><br />var parser = new DynamicXmlParser(); <p>var xmlCatalog = parser.Parse(xml);</p> <p>// Access the child elements of the catalog just as an ordinary collection property<br />foreach (var xmlBook in xmlCatalog.Books)<br />{<br /> var book = new Book<br /> {<br /> // Read attributes or element values as properties on an element<br /> // Values are automatically and implicitly converted to the appropriate type<br /> Id = xmlBook.Id, // string<br /> Author = new Author<br /> {<br /> FirstName = xmlBook.Author.FirstName, // string<br /> LastName = xmlBook.Author.LastName, // string<br /> },<br /> Price = xmlBook.Price, // decimal<br /> IsBetaRelease = xmlBook.IsBetaRelease, // bool<br /> BookType = xmlBook.BookType // BookType enum<br /> };</p> <p> Console.WriteLine(&quot;Book id: {0}, Author: {1}, Price: ${2}, IsBetaRelease: {3}, Book type: {4}&quot;, book.Id, book.Author, book.Price, book.IsBetaRelease, book.BookType);<br />}</pre></p> <p>Thanks to dynamic we can use ordinary property access syntax to find child elements of our catalog. Attributes or values of an element can be accessed the same way.</p> <p><br /><strong>Accessing child element collections</strong></p> <p>If you expect there to be multiple child elements with a given element name, those elements can be accessed as a collection property. In the example there are multiple Book elements in the catalog, so you can access them through xmlCatalog.Books.</p> <p>In the example, the Book elements are accessed by adding a plural &#39;s&#39; to the element name, i.e. &quot;Books&quot;. 
However, this kind of access works with other plural forms as well:</p> <pre class='prettyprint'>[Test]<br />public void Collections_can_be_accessed_with_multiple_kinds_of_pluralization()<br />{<br /> var xml = @&quot;<br />&lt;Bag&gt;<br /> &lt;Car /&gt;<br /> &lt;Glas /&gt;<br /> &lt;Glas /&gt;<br /> &lt;Category /&gt;<br /> &lt;Category /&gt;<br /> &lt;Category /&gt;<br /> &lt;Octopus /&gt;<br /> &lt;Octopus /&gt;<br /> &lt;Octopus /&gt;<br /> &lt;Octopus /&gt;<br />&lt;/Bag&gt;<br />&quot;;<br /> var parser = new DynamicXmlParser(); <p> var bag = parser.Parse(xml);</p> <p> Assert.AreEqual(1, bag.Cars.Count); // ...s<br /> Assert.AreEqual(2, bag.Glasses.Count); // ...es<br /> Assert.AreEqual(3, bag.Categories.Count); // ...ies<br /> Assert.AreEqual(4, bag.OctopusElements.Count); // worst case, just postfix the word Elements<br />}</pre></p> <p>As the example shows, you can use a couple of different pluralization forms. If none of them match your specific scenario, just use the element name and postfix it with &#39;Elements&#39;.</p> <p><br /><strong>Automatic conversions</strong></p> <p>If you try to set a typed variable or property to a value read from the parsed XML document that value is automatically, implicitly converted to the type of the variable or property that you are trying to assign to. The requirement is that the type you are assigning to has a defined conversion in the parser.</p> <p>Currently, default conversions exist for int, double, float, decimal, bool, TimeSpan, DateTime and enums. 
New conversions can easily be added and just as easily you can override the default conversions with your own.</p> <p>Here is an example of how to change the default conversion for boolean values so that it accepts &quot;0&quot; or &quot;1&quot; instead of &quot;false&quot; and &quot;true&quot;:</p> <pre class='prettyprint'>[Test]<br />public void Conversion_can_be_customized_for_any_type()<br />{<br /> var xml = @&quot;&lt;User Username=&#39;SomeName&#39; Password=&#39;secret&#39; IsAdmin=&#39;1&#39; /&gt;&quot;; <p> var parser = new DynamicXmlParser();</p> <p> parser.SetConverter(x =&gt;<br /> {<br /> if (x == &quot;1&quot;)<br /> return true;</p> <p> return false;<br /> });</p> <p> var user = parser.Parse(xml);</p> <p> Assert.IsTrue((bool)user.IsAdmin);<br />}</pre></p> <p>Implicit conversions can be done when using the value in a context where the expected type can be inferred, such as assigning to a variable or using the value in a method call. If you want to convert the value when the expected type cannot be inferred you can use an explicit cast.</p> <p>An example of this is shown in the example above where the value is used in an assertion. If the value was not explicitly cast in the call to Assert.IsTrue, then no conversion would be triggered and the value returned would actually be an instance of the class DynamicXmlElement.</p> <p><br /><strong>Namespaces</strong></p> <p>Every now and then you have to parse an XML document where someone has been so kind as to use the wonderful concept of XML namespaces. How do you tackle that one with this dynamic-schynamic thingie? The answer is quite simple, just add an alias for the namespace, specifying which URI it represents. 
You can then access the properties and collections just as before, by prefixing the property name with the namespace alias.</p> <pre class='prettyprint'>[Test]<br />public void Elements_in_different_namespaces_can_be_accessed_by_prefixing_element_name_with_namespace()<br />{<br /> var xml =<br /> @&quot;&lt;?xml version=&#39;1.0&#39; encoding=&#39;UTF-8&#39; ?&gt;<br />&lt;!-- Here comes some XML --&gt;<br />&lt;Book xmlns=&#39;http://www.somesite.org/xml/DefaultNamespace&#39; <br /> xmlns:NS=&#39;http://www.somesite.org/xml&#39;&gt;<br /> &lt;Title&gt;The title&lt;/Title&gt;<br /> &lt;NS:Author&gt;<br /> &lt;NS:FirstName&gt;Steve&lt;/NS:FirstName&gt;<br /> &lt;NS:LastName&gt;Sanders&lt;/NS:LastName&gt;<br /> &lt;/NS:Author&gt;<br />&lt;/Book&gt;<br />&quot;;<br /> var _parser = new DynamicXmlParser();<br /> <br /> _parser.SetNamespaceAlias(&quot;http://www.somesite.org/xml&quot;, &quot;NS&quot;); <p> var book = _parser.Parse(xml);</p> <p> Assert.AreEqual(&quot;The title&quot;, (string)book.Title);<br /> Assert.AreEqual(&quot;Steve&quot;, (string)book.NSAuthor.NSFirstName);<br /> Assert.AreEqual(&quot;Sanders&quot;, (string)book.NSAuthor.NSLastName);<br />}</pre></p> <p><br /><strong>Conclusion / Show me teh codez!</strong></p> <p>There you have it. Thanks to Microsoft for adding some dynamic love and care to C#. </p> <p>Gosu.Commons is an open source project of mine that is up at <a href='https://github.com/erikojebo/gosu.commons/wiki'>GitHub</a>. Feel free to poke around or even contribute. If you just want to use the thing, Gosu.Commons is also available on <a href='http://nuget.org/List/Packages/Gosu.Commons'>NuGet</a>.
To add a reference, just open the package manager console and type:</p> <pre class='prettyprint'>PM&gt; Install-Package Gosu.Commons</pre> <p><br /></p>Thu, 17 Nov 2011 22:56:02 +01002011-11-17T22:56:02+01:00341http://www.erikojebo.se/Code/Details/341webmaster@erikojebo.seChanging Character Encoding for File in Emacs<p>Character encoding is a pain... If you find yourself wanting to change the encoding of a file, here is how to do it in emacs:</p> <p>C-x RET f utf-8 RET</p> <p>When you then save the file, it is written with the specified encoding. If you can&#39;t remember the exact name of the encoding type:</p> <p>C-x RET f TAB</p> <p>This will give you a list of all available encodings in the help buffer.</p>Tue, 15 Nov 2011 15:42:22 +01002011-11-15T15:42:22+01:00340http://www.erikojebo.se/Code/Details/340webmaster@erikojebo.seEmacs replace-regexp-fu<p>The time has come to take a deeper look at the super useful emacs function <em>replace-regexp</em>.</p> <p>The scenario:</p> <p>Let&#39;s say you have an XML document that looks something like this:</p> <pre class='prettyprint'>&lt;Person FirstName=&#39;Steve&#39; LastName=&#39;Smith&#39; Phone=&#39;555-12345&#39; Title=&#39;Mr.&#39; BirthDate=&#39;1950-01-01&#39; /&gt;</pre> <p>and you want to turn it into C# code, like this:</p> <pre class='prettyprint'><br />var p = new Person<br />{<br /> FirstName = &quot;Steve&quot;,<br /> LastName = &quot;Smith&quot;,<br /> Phone = &quot;555-12345&quot;,<br /> Title = &quot;Mr.&quot;,<br /> BirthDate = DateTime.Parse(&quot;1950-01-01&quot;),<br />}<br /></pre> <p>So, perfect time to put our regex skills to the test. Since emacs is my editor of choice, replace-regexp is what I&#39;ll use to get the job done. 
The regex for extracting the values we are interested in will look something like this:</p> <pre class='prettyprint'>&lt;Person FirstName=&#39;\(.*?\)&#39; LastName=&#39;\(.*?\)&#39; Phone=&#39;\(.*?\)&#39; Title=&#39;\(.*?\)&#39; BirthDate=&#39;\(.*?\)&#39; /&gt;</pre> <p>Note that you have to escape the parens to create a capture group, <strong>not</strong> to match them literally. This is kind of backwards compared to most other regex implementations, but comes in handy when performing search &amp; replace on Lisp code :)</p> <p>The replace string will look like this:</p> <pre class='prettyprint'>var p = new Person<br />{<br /> FirstName = &quot;\1&quot;,<br /> LastName = &quot;\2&quot;,<br /> Phone = &quot;\3&quot;,<br /> Title = &quot;\4&quot;,<br /> BirthDate = DateTime.Parse(&quot;\5&quot;)<br />}</pre> <p>Ok, so mission accomplished. However, a week later the format is extended to include the address of the person as well:</p> <pre class='prettyprint'><br />&lt;Person FirstName=&#39;Steve&#39; LastName=&#39;Smith&#39; Phone=&#39;555-12345&#39; Title=&#39;Mr.&#39; BirthDate=&#39;1950-01-01&#39;&gt;<br /> &lt;Address PostalCode=&#39;12345&#39; State=&#39;Florida&#39; City=&#39;Jacksonville&#39; Street=&#39;Some street&#39; /&gt;<br />&lt;/Person&gt;<br /></pre> <p>and the desired C# code:</p> <pre class='prettyprint'>var p = new Person<br />{<br /> FirstName = &quot;Steve&quot;,<br /> LastName = &quot;Smith&quot;,<br /> Phone = &quot;555-12345&quot;,<br /> Title = &quot;Mr.&quot;,<br /> BirthDate = DateTime.Parse(&quot;1950-01-01&quot;), <p> Address = new Address<br /> {<br /> PostalCode = &quot;12345&quot;,<br /> State = &quot;Florida&quot;,<br /> City = &quot;Jacksonville&quot;,<br /> Street = &quot;Some street&quot;<br /> }<br />}<br /></pre></p> <p>Multi-line time! The new regex now includes line breaks.
To enter these, you can either write the expression outside of the mini-buffer and yank it in when executing the replace-regexp command, or you can enter a newline in the minibuffer by typing C-q C-j. Here is the regex:</p> <pre class='prettyprint'>&lt;Person FirstName=&#39;\(.*?\)&#39; LastName=&#39;\(.*?\)&#39; Phone=&#39;\(.*?\)&#39; Title=&#39;\(.*?\)&#39; BirthDate=&#39;\(.*?\)&#39;&gt;<br /> &lt;Address PostalCode=&#39;\(.*?\)&#39; State=&#39;\(.*?\)&#39; City=&#39;\(.*?\)&#39; Street=&#39;\(.*?\)&#39; /&gt;<br />&lt;/Person&gt;</pre> <p>and the replace expression:</p> <pre class='prettyprint'>var p = new Person<br />{<br /> FirstName = &quot;\1&quot;,<br /> LastName = &quot;\2&quot;,<br /> Phone = &quot;\3&quot;,<br /> Title = &quot;\4&quot;,<br /> BirthDate = DateTime.Parse(&quot;\5&quot;), <p> Address = new Address<br /> {<br /> PostalCode = &quot;\6&quot;,<br /> State = &quot;\7&quot;,<br /> City = &quot;\8&quot;,<br /> Street = &quot;\9&quot;<br /> }<br />}</pre></p> <p>Still quite straightforward, as long as you get the newlines right in the search expression.</p> <p>Ok, so yet another week goes by, and now there is one small addition to the format: there should be a &quot;MiddleName&quot; attribute added to the person element:</p> <pre class='prettyprint'><br />&lt;Person FirstName=&#39;Steve&#39; MiddleName=&#39;F.&#39; LastName=&#39;Smith&#39; Phone=&#39;555-12345&#39; Title=&#39;Mr.&#39; BirthDate=&#39;1950-01-01&#39;&gt;<br /> &lt;Address PostalCode=&#39;12345&#39; State=&#39;Florida&#39; City=&#39;Jacksonville&#39; Street=&#39;Some street&#39; /&gt;<br />&lt;/Person&gt;<br /></pre> <p>Here is the matching C# code:</p> <pre class='prettyprint'>var p = new Person<br />{<br /> FirstName = &quot;Steve&quot;,<br /> MiddleName = &quot;F.&quot;,<br /> LastName = &quot;Smith&quot;,<br /> Phone = &quot;555-12345&quot;,<br /> Title = &quot;Mr.&quot;,<br /> BirthDate = DateTime.Parse(&quot;1950-01-01&quot;), <p> Address = new Address<br /> {<br /> PostalCode = 
&quot;12345&quot;,<br /> State = &quot;Florida&quot;,<br /> City = &quot;Jacksonville&quot;,<br /> Street = &quot;Some street&quot;<br /> }<br />}<br /></pre></p> <p>Only a minor change, so the regex should only need a small tweak. However, adding this field brings us up to 10 match groups. If we continue with the same pattern and just add the tenth group and reference, like this:</p> <pre class='prettyprint'><br />var p = new Person<br />{<br /> FirstName = &quot;\1&quot;,<br /> MiddleName = &quot;\2&quot;,<br /> LastName = &quot;\3&quot;,<br /> Phone = &quot;\4&quot;,<br /> Title = &quot;\5&quot;,<br /> BirthDate = DateTime.Parse(&quot;\6&quot;), <p> Address = new Address<br /> {<br /> PostalCode = &quot;\7&quot;,<br /> State = &quot;\8&quot;,<br /> City = &quot;\9&quot;,<br /> Street = &quot;\10&quot;<br /> }<br />}<br /></pre></p> <p>The output from the replace will then be:</p> <pre class='prettyprint'><br />var p = new Person<br />{<br /> FirstName = &quot;Steve&quot;,<br /> MiddleName = &quot;F.&quot;,<br /> LastName = &quot;Smith&quot;,<br /> Phone = &quot;555-12345&quot;,<br /> Title = &quot;Mr.&quot;,<br /> BirthDate = DateTime.Parse(&quot;1950-01-01&quot;), <p> Address = new Address<br /> {<br /> PostalCode = &quot;12345&quot;,<br /> State = &quot;Florida&quot;,<br /> City = &quot;Jacksonville&quot;,<br /> Street = &quot;Steve0&quot;<br /> }<br />}<br /></pre></p> <p>If you take a closer look at the Street value, you see that it is actually &quot;Steve0&quot;, which is not remotely what you would have wanted it to be. Instead of referencing the 10th capture group, it is actually a reference to the 1st capture group immediately followed by a zero. The reason for this is that emacs only allows a single digit following the backslash.</p> <p>What to do now? We&#39;ll have to bring out the big guns. It&#39;s Lisp time!</p> <p>Emacs lets you embed lisp code within your replace expression, by escaping it with &quot;\,&quot;. 
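</p> <p>As an aside, the idea of computing each replacement from the matched text is not unique to Emacs; most regex libraries support it through a replacement callback. Here is a rough Python sketch of the same idea, incrementing each number it finds (the sample string is made up for illustration):</p>

```python
import re

# Compute each replacement from the match itself, the same idea as
# embedding Lisp with "\," in replace-regexp.
text = 'PostalCode = "7", State = "8", City = "9", Street = "10"'

# Add 1 to every number in the string
bumped = re.sub(r"[0-9]+", lambda m: str(int(m.group(0)) + 1), text)

print(bumped)  # PostalCode = "8", State = "9", City = "10", Street = "11"
```

<p>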
A useful function for this case is match-string which takes an integer specifying the capture group number to reference. The new expression will then be:</p> <p><br /><pre class='prettyprint'>var p = new Person<br />{<br /> FirstName = &quot;\,(match-string 1)&quot;,<br /> MiddleName = &quot;\,(match-string 2)&quot;,<br /> LastName = &quot;\,(match-string 3)&quot;,<br /> Phone = &quot;\,(match-string 4)&quot;,<br /> Title = &quot;\,(match-string 5)&quot;,<br /> BirthDate = DateTime.Parse(&quot;\,(match-string 6)&quot;),</p> <p> Address = new Address<br /> {<br /> PostalCode = &quot;\,(match-string 7)&quot;,<br /> State = &quot;\,(match-string 8)&quot;,<br /> City = &quot;\,(match-string 9)&quot;,<br /> Street = &quot;\,(match-string 10)&quot;<br /> }<br />}</pre></p> <p>Tada! Now we&#39;re up and rolling again.</p> <p>For the fun of it, let&#39;s say that another week goes by and yet another attribute is added, this time it is NickName:</p> <pre class='prettyprint'>&lt;Person FirstName=&#39;Steve&#39; MiddleName=&#39;F.&#39; LastName=&#39;Smith&#39; NickName=&#39;Stevenizzle&#39; Phone=&#39;555-12345&#39; Title=&#39;Mr.&#39; BirthDate=&#39;1950-01-01&#39;&gt;<br /> &lt;Address PostalCode=&#39;12345&#39; State=&#39;Florida&#39; City=&#39;Jacksonville&#39; Street=&#39;Some street&#39; /&gt;<br />&lt;/Person&gt;</pre> <p>Since we&#39;ve removed the limitation of 9 capture groups we can just modify the regex to add the new capture group and reference.</p> <pre class='prettyprint'>&lt;Person FirstName=&#39;\(.*?\)&#39; MiddleName=&#39;\(.*?\)&#39; LastName=&#39;\(.*?\)&#39; NickName=&#39;\(.*?\)&#39; Phone=&#39;\(.*?\)&#39; Title=&#39;\(.*?\)&#39; BirthDate=&#39;\(.*?\)&#39;&gt;<br /> &lt;Address PostalCode=&#39;\(.*?\)&#39; State=&#39;\(.*?\)&#39; City=&#39;\(.*?\)&#39; Street=&#39;\(.*?\)&#39; /&gt;<br />&lt;/Person&gt;</pre> <pre class='prettyprint'>var p = new Person<br />{<br /> FirstName = &quot;\,(match-string 1)&quot;,<br /> MiddleName = &quot;\,(match-string 
2)&quot;,<br /> LastName = &quot;\,(match-string 3)&quot;,<br /> NickName = &quot;\,(match-string 4)&quot;,<br /> Phone = &quot;\,(match-string 4)&quot;,<br /> Title = &quot;\,(match-string 5)&quot;,<br /> BirthDate = DateTime.Parse(&quot;\,(match-string 6)&quot;), <p> Address = new Address<br /> {<br /> PostalCode = &quot;\,(match-string 7)&quot;,<br /> State = &quot;\,(match-string 8)&quot;,<br /> City = &quot;\,(match-string 9)&quot;,<br /> Street = &quot;\,(match-string 10)&quot;<br /> }<br />}</pre></p> <p>As you can see, since the groups are referenced by number, adding a group in the middle means that we need to increment a bunch of references. This is tedious work. Since we&#39;re already in the regex mindset, let&#39;s go regex on our regex and add 1 to all the numbers that need incrementing. Fortunately we already know how to embed lisp in our replace expression, so all we need to do is hack away at some lovely lisp code.</p> <p>The search expression:</p> <pre class='prettyprint'>\([0-9]+\)</pre> <p>and the replace expression:</p> <pre class='prettyprint'>\,(+ 1 (string-to-number (match-string 1)))</pre> <p>By applying this regex and the replace expression above, from the second row referencing match group 4 and downwards, we get this beauty:</p> <pre class='prettyprint'>var p = new Person<br />{<br /> FirstName = &quot;\,(match-string 1)&quot;,<br /> MiddleName = &quot;\,(match-string 2)&quot;,<br /> LastName = &quot;\,(match-string 3)&quot;,<br /> NickName = &quot;\,(match-string 4)&quot;,<br /> Phone = &quot;\,(match-string 5)&quot;,<br /> Title = &quot;\,(match-string 6)&quot;,<br /> BirthDate = DateTime.Parse(&quot;\,(match-string 7)&quot;), <p> Address = new Address<br /> {<br /> PostalCode = &quot;\,(match-string 8)&quot;,<br /> State = &quot;\,(match-string 9)&quot;,<br /> City = &quot;\,(match-string 10)&quot;,<br /> Street = &quot;\,(match-string 11)&quot;<br /> }<br />}</pre></p> <p>which in turn gives us the final result:</p> <pre
class='prettyprint'><br />var p = new Person<br />{<br /> FirstName = &quot;Steve&quot;,<br /> MiddleName = &quot;F.&quot;,<br /> LastName = &quot;Smith&quot;,<br /> NickName = &quot;Stevenizzle&quot;,<br /> Phone = &quot;555-12345&quot;,<br /> Title = &quot;Mr.&quot;,<br /> BirthDate = DateTime.Parse(&quot;1950-01-01&quot;), <p> Address = new Address<br /> {<br /> PostalCode = &quot;12345&quot;,<br /> State = &quot;Florida&quot;,<br /> City = &quot;Jacksonville&quot;,<br /> Street = &quot;Some street&quot;<br /> }<br />}<br /></pre></p> <p>So with the ability to embed Lisp code into your regexes, you are only limited by your imagination and possibly your Lisp skills :)</p>Mon, 14 Nov 2011 16:48:59 +01002011-11-14T16:48:59+01:00310http://www.erikojebo.se/Code/Details/310webmaster@erikojebo.seSearch and Replace with Unprintable Characters in Emacs<p>Lately I&#39;ve found myself doing a lot of search &amp; replace to restructure text. Quite often this involves changing whitespace in some way.</p> <p>For example, to include a newline character in the text to find, or in the replacement string, you simply enter a newline by using the chord C-q C-j.</p> <p>Here are a few useful whitespace chords:<br />Tab: C-q C-i<br />Linefeed: C-q C-j<br />Carriage return: C-q C-m</p>Fri, 10 Jun 2011 22:34:13 +02002011-06-10T22:34:13+02:00309http://www.erikojebo.se/Code/Details/309webmaster@erikojebo.seKeyboard Rectangle Selection in Visual Studio<p>Rectangle selection is one of the most time-saving tricks when editing large chunks of code, but the benefit is more or less lost if you have to reach for the mouse to do it.</p> <p>In Visual Studio you can hold down the alt button and use your mouse to select a rectangle of text. You can also use <strong>Control + Shift + arrow key</strong> to do this with the keyboard. 
If those shortcuts do not work on your machine, go to <em>Tools-&gt;Options-&gt;Environment-&gt;Keyboard</em> and add shortcuts for the following commands:</p> <p>Edit.LineDownExtendColumn<br />Edit.LineUpExtendColumn<br />Edit.CharLeftExtendColumn<br />Edit.CharRightExtendColumn</p>Thu, 19 May 2011 08:35:28 +02002011-05-19T08:35:28+02:00303http://www.erikojebo.se/Code/Details/303webmaster@erikojebo.seCleaner Test Setup using Builders<p>One of the most common causes for messy test code is setup code that reduces<br />the signal-to-noise ratio and makes you lose focus on the parts of the test<br />code that actually are important. I&#39;ve recently taken a liking to the builder<br />pattern as a way to reduce this problem.</p> <p>In this post I&#39;m going to compare a few different ways to write your setup<br />code, to illustrate the pros and cons of the different styles.</p> <p>To start off, let&#39;s look at some classic object construction code:</p> <pre class='prettyprint'>var comment1 = new Comment();<br />comment1.Body = &quot;Comment 1 body&quot;;<br />comment1.Date = new DateTime(2011, 1, 2, 3, 4, 5); <p>var comment2 = new Comment();<br />comment2.Body = &quot;Comment 2 body&quot;;<br />comment2.Date = new DateTime(2011, 1, 2, 3, 4, 5);</p> <p>var post = new Post();<br />post.Title = &quot;Title&quot;;<br />post.Body = &quot;Body&quot;;<br />post.Date = new DateTime(2011, 1, 2, 3, 4, 5);<br />post.AddComment(comment1);<br />post.AddComment(comment2);</pre></p> <p>It doesn&#39;t get more basic than that, but there is quite a lot of noise. 
The<br />first step toward reducing that noise is to use the object initializer<br />syntax:</p> <pre class='prettyprint'>var comment1 = new Comment<br /> {<br /> Body = &quot;Comment 1 body&quot;,<br /> Date = new DateTime(2011, 1, 2, 3, 4, 5)<br /> }; <p>var comment2 = new Comment<br /> {<br /> Body = &quot;Comment 2 body&quot;,<br /> Date = new DateTime(2011, 1, 2, 3, 4, 5)<br /> };</p> <p>var post = new Post<br /> {<br /> Title = &quot;Title&quot;,<br /> Body = &quot;Body&quot;,<br /> Date = new DateTime(2011, 1, 2, 3, 4, 5)<br /> };</p> <p>post.AddComment(comment1);<br />post.AddComment(comment2);</pre></p> <p>I&#39;d say that this syntax makes the important information stand out a bit<br />more. Another way is to use a constructor with named parameters and default<br />values:</p> <pre class='prettyprint'>var comment1 = new Comment(<br /> body: &quot;comment 1 body&quot;,<br /> date: new DateTime(2011, 1, 2, 3, 4, 5)); <p>var comment2 = new Comment(<br /> body: &quot;comment 2 body&quot;,<br /> date: new DateTime(2011, 1, 2, 3, 4, 5));</p> <p>var post = new Post(<br /> title: &quot;Title&quot;,<br /> body: &quot;Body&quot;,<br /> date: new DateTime(2011, 1, 2, 3, 4, 5));</p> <p>post.AddComment(comment1);<br />post.AddComment(comment2);</pre></p> <p>This approach is a bit more compact than the object initializer way. Apart<br />from the syntax, a major problem with both object initializers and<br />constructors with named arguments is that they force you to modify the<br />entities you want to create so that they work well with the test setup<br />code. In the case above, that is not a real problem, but it becomes a problem<br />if you, for example, want to use certain default values when creating<br />instances for your tests that you do not want to use in the production<br />code. This is where the builder pattern comes into the picture.</p> <p>A builder is a class whose sole purpose is to facilitate creation of instances<br />of a specific class. 
In this case we would probably use a PostBuilder and a<br />CommentBuilder. These classes can have all the helper methods you need so that<br />you can easily get instances for your test cases.</p> <p>Here is an example:</p> <pre class='prettyprint'>var post = new PostBuilder()<br /> .WithTitle(&quot;Title&quot;)<br /> .WithBody(&quot;Body&quot;)<br /> .WithDate(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .WithComment(new CommentBuilder()<br /> .WithBody(&quot;comment 1 body&quot;)<br /> .WithDate(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .Build())<br /> .WithComment(new CommentBuilder()<br /> .WithBody(&quot;comment 2 body&quot;)<br /> .WithDate(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .Build())<br /> .Build();</pre> <p>This style of programming has been quite popular in the .NET space for the<br />last two years or so: a fluent interface using daisy chaining of method calls,<br />with method names chosen to give a prose-like reading experience. However, this<br />style easily gets quite verbose and has fallen out of favor. The reason is<br />simple: all those &quot;With&quot;:s in the example above clutter up the code rather<br />than making it easier to read. A slightly more compact version could look<br />something like this:</p> <pre class='prettyprint'>var post = new PostBuilder()<br /> .Title(&quot;Title&quot;)<br /> .Body(&quot;Body&quot;)<br /> .Date(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .Comment(new CommentBuilder()<br /> .Body(&quot;comment 1 body&quot;)<br /> .Date(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .Build())<br /> .Comment(new CommentBuilder()<br /> .Body(&quot;comment 2 body&quot;)<br /> .Date(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .Build())<br /> .Build();</pre> <p>Now we&#39;re getting somewhere. There is less noise, but there are still a couple<br />of builder instantiations scattered around the code. The syntax could be<br />cleaned up a bit by introducing a nicer way to create the builders. 
Below is<br />an example with a static class which has properties for the different kinds of<br />builders. The factory class is called Build to make the code read a little<br />nicer.</p> <pre class='prettyprint'>var post = Build.Post<br /> .Title(&quot;Title&quot;)<br /> .Body(&quot;Body&quot;)<br /> .Date(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .Comment(Build.Comment<br /> .Body(&quot;comment 1 body&quot;)<br /> .Date(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .Build())<br /> .Comment(Build.Comment<br /> .Body(&quot;comment 2 body&quot;)<br /> .Date(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .Build())<br /> .Build();</pre> <p>Better. Two problems remaining are the duplication of the word Comment in<br />the call to the Comment method, and that annoying call to Build for the<br />comments. These problems could be addressed by creating a version of the<br />Comment method that takes a lambda operating on a builder as an argument:</p> <pre class='prettyprint'>var post = Build.Post<br /> .Title(&quot;Title&quot;)<br /> .Body(&quot;Body&quot;)<br /> .Date(new DateTime(2011, 1, 2, 3, 4, 5))<br /> .Comment(c =&gt; c<br /> .Body(&quot;comment 1 body&quot;)<br /> .Date(new DateTime(2011, 1, 2, 3, 4, 5)))<br /> .Comment(c =&gt; c<br /> .Body(&quot;comment 2 body&quot;)<br /> .Date(new DateTime(2011, 1, 2, 3, 4, 5)))<br /> .Build();</pre> <p>I&#39;d say this is even better. The only noise remaining is the duplication of<br />Date/DateTime when setting the date for a comment or a post and the &quot;c =&gt; c&quot; part<br />of the lambda. 
The implementation of the PostBuilder class now looks like<br />this:</p> <pre class='prettyprint'>public class PostBuilder<br />{<br /> private readonly Post _post = new Post();<br /> <br /> public PostBuilder Title(string title)<br /> {<br /> _post.Title = title;<br /> return this;<br /> } <p> public PostBuilder Body(string body)<br /> {<br /> _post.Body = body;<br /> return this;<br /> }</p> <p> public PostBuilder Date(DateTime date)<br /> {<br /> _post.Date = date;<br /> return this;<br /> }</p> <p> public Post Build()<br /> {<br /> return _post;<br /> }<br /> <br /> public PostBuilder Comment(Action&lt;CommentBuilder&gt; initializer)<br /> {<br /> var builder = new CommentBuilder();<br /> initializer(builder);<br /> <br /> var comment = builder.Build();<br /> _post.AddComment(comment);</p> <p> return this;<br /> }<br />}</pre></p> <p>I usually find myself using the same DateTime constructor over and over<br />again. This cries out for refactoring. Now we can reap the benefits of using a<br />builder class, since we can easily add any helpers we need. In this case, by<br />allowing the date to be set using the standard six integer values for year,<br />month, day, hour, minute and second:</p> <pre class='prettyprint'>var post = Build.Post<br /> .Title(&quot;Title&quot;)<br /> .Body(&quot;Body&quot;)<br /> .Date(2011, 1, 2, 3, 4, 5)<br /> .Comment(c =&gt; c<br /> .Body(&quot;comment 1 body&quot;)<br /> .Date(2011, 1, 2, 3, 4, 5))<br /> .Comment(c =&gt; c<br /> .Body(&quot;comment 2 body&quot;)<br /> .Date(2011, 1, 2, 3, 4, 5))<br /> .Build();</pre> <p>Much better! The builder now contains the following method:</p> <pre class='prettyprint'>public PostBuilder Date(<br /> int year, int month, int day, <br /> int hour, int minute, int second)<br />{<br /> _post.Date = new DateTime(year, month, day, hour, minute, second);<br /> return this;<br />}</pre> <p>Ok, so now the setup code looks nice and tight, but the builder class contains<br />a nasty form of duplication. 
For each property that is to be exposed through<br />the builder there is a matching method:</p> <pre class='prettyprint'>public PostBuilder Title(string title)<br />{<br /> _post.Title = title;<br /> return this;<br />} <p>public PostBuilder Body(string body)<br />{<br /> _post.Body = body;<br /> return this;<br />}</pre></p> <p>This code is extremely tedious to write, especially if you have a large<br />application with a lot of entities. LISP eats this kind of duplication for<br />breakfast, as does Ruby, but it is often quite hard to remove in statically<br />typed languages which have no pre-processor or macro facilities.</p> <p>Fortunately, C# 4 includes the dynamic keyword, which opens up new<br />possibilities for the static folks. All the dumb builder methods which set the<br />property with the same name as the method on the entity could easily be<br />replaced with a method missing hook:</p> <pre class='prettyprint'>public class DynamicBuilder&lt;T&gt; : DynamicObject <br /> where T : class, new()<br />{<br /> protected readonly T Entity = new T(); <p> // This method is called when you invoke a method that does not exist<br /> public override bool TryInvokeMember(<br /> InvokeMemberBinder binder, object[] args, out object result)<br /> {<br /> // Remember to return self to enable daisy chaining<br /> result = this;</p> <p> // Get the property on the entity that has the same name<br /> // as the method that was invoked<br /> var property = typeof(T).GetProperty(binder.Name);</p> <p> var propertyExists = property != null;</p> <p> if (propertyExists)<br /> {<br /> property.SetValue(Entity, args[0], null);<br /> }</p> <p> return propertyExists;<br /> }</p> <p> public T Build()<br /> {<br /> return Entity;<br /> }<br />}</pre></p> <p>Sweet! 
Now you can throw away most of your boring builder code, except for the<br />helpers that are tailor-made for the specific entity type that you are building.</p> <p>The post builder now looks like this:</p> <pre class='prettyprint'>public class DynamicPostBuilder : DynamicBuilder&lt;Post&gt;<br />{<br /> public DynamicPostBuilder Date(<br /> int year, int month, int day,<br /> int hour, int minute, int second)<br /> {<br /> Entity.Date = new DateTime(year, month, day, hour, minute, second);<br /> return this;<br /> } <p> public DynamicPostBuilder Comment(Action&lt;dynamic&gt; initializer)<br /> {<br /> var builder = new DynamicCommentBuilder();<br /> <br /> initializer(builder);<br /> <br /> var comment = builder.Build();<br /> Entity.AddComment(comment);</p> <p> return this;<br /> }<br />}</pre></p> <p>The only downside to this is that you lose refactoring support and<br />intellisense, which can be a big deal for many .NET developers. However, if<br />you use TDD, the refactoring support should not be an issue, since you will<br />instantly know what was broken when something is renamed.</p> <p>The code for adding a comment looks suspiciously like a bit of code that might<br />get repeated in other builders. So there is another chance to, for example,<br />introduce a convention that would allow that code to be pushed down and<br />handled in the method missing hook of the base class. 
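</p> <p>For comparison, this is exactly the kind of duplication that a dynamic language removes almost for free. Here is a rough Python sketch of the same method-missing builder idea (the class and entity names are made up for illustration, not part of any library):</p>

```python
# A rough analogue of the DynamicBuilder above: any unknown method name
# becomes a setter for the property of the same name on the entity.
class DynamicBuilder:
    def __init__(self, entity_factory):
        self._entity = entity_factory()

    def __getattr__(self, name):
        # __getattr__ is only called for attributes that do not exist,
        # i.e. Python's "method missing" hook
        def setter(value):
            setattr(self._entity, name, value)
            return self  # return self to enable daisy chaining
        return setter

    def build(self):
        return self._entity


class Post:
    pass


post = DynamicBuilder(Post).Title("Title").Body("Body").build()
print(post.Title, post.Body)  # Title Body
```

<p>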
Only inconsistency and<br />lack of imagination set the limits in this case.</p> <p>To use the builder you have to make sure that the builder instance is typed as<br />dynamic, so that the compiler will get out of your way and allow you to call<br />the methods you want to call, even though they are not actually declared in<br />the builder class.</p> <p>In this case, that can be accomplished by modifying the builder factory class:</p> <pre class='prettyprint'>public class DynamicBuild<br />{<br /> public static dynamic Post<br /> {<br /> get { return new DynamicPostBuilder(); }<br /> }<br /> <br /> public static dynamic Comment<br /> {<br /> get { return new DynamicCommentBuilder(); }<br /> }<br />}</pre> <p>So, to sum up. Using the builder pattern allows you to clean up your test code<br />significantly and makes it trivial to add helpers when needed.</p> <p><strong>Original setup code:</strong></p> <pre class='prettyprint'>var comment1 = new Comment();<br />comment1.Body = &quot;Comment 1 body&quot;;<br />comment1.Date = new DateTime(2011, 1, 2, 3, 4, 5); <p>var comment2 = new Comment();<br />comment2.Body = &quot;Comment 2 body&quot;;<br />comment2.Date = new DateTime(2011, 1, 2, 3, 4, 5);</p> <p>var post = new Post();<br />post.Title = &quot;Title&quot;;<br />post.Body = &quot;Body&quot;;<br />post.Date = new DateTime(2011, 1, 2, 3, 4, 5);<br />post.AddComment(comment1);<br />post.AddComment(comment2);</pre></p> <p><strong>Builder based setup code:</strong></p> <pre class='prettyprint'>var post = Build.Post<br /> .Title(&quot;Title&quot;)<br /> .Body(&quot;Body&quot;)<br /> .Date(2011, 1, 2, 3, 4, 5)<br /> .Comment(c =&gt; c<br /> .Body(&quot;comment 1 body&quot;)<br /> .Date(2011, 1, 2, 3, 4, 5))<br /> .Comment(c =&gt; c<br /> .Body(&quot;comment 2 body&quot;)<br /> .Date(2011, 1, 2, 3, 4, 5))<br /> .Build();</pre> <p>Happy building!<br /></p>Thu, 03 Feb 2011 20:35:05 
+01002011-02-03T20:35:05+01:00302http://www.erikojebo.se/Code/Details/302webmaster@erikojebo.seQuick Tip: Finding apt-get Install Location<p>To find where apt-get installed a given package, just run the following command:</p> <p><br /><pre class='prettyprint'>sudo dpkg -L packagename</pre></p>Mon, 24 Jan 2011 21:00:49 +01002011-01-24T21:00:49+01:00295http://www.erikojebo.se/Code/Details/295webmaster@erikojebo.seGit Commit Script for the Lazy Typist<p>If you are a git user, you are probably a fan of the command line, and you probably use aliases to make your daily work more efficient. One of my most frequently used aliases is <em>gc</em>, which is an alias for <em>git commit -m</em>.</p> <p><br /><pre class='prettyprint'>alias gc=&#39;git commit -m&#39;</pre></p> <p><br />However, one shortcoming of this alias is that you still have to add the double quotes, which is tedious. </p> <p><br /><pre class='prettyprint'>gc &quot;this is a commit message&quot;</pre></p> <p><br />There are several ways of solving that problem, and I chose to do it by writing a small ruby script:</p> <p><br /><pre class='prettyprint'>#!/usr/bin/ruby</p> <p># Performs a git commit with all the command line arguments as a message, <br /># for example:<br /># gcm this is a commit message =&gt; git commit -m &quot;this is a commit message&quot;</p> <p>message = ARGV.join(&quot; &quot;)<br />%x[git commit -m &#39;#{message}&#39;]</pre></p> <p><br />By saving this script with a suitable name in some directory that is part of your path, you can now commit with messages without quotes.</p> <p>The commit shown above now becomes:</p> <p><br /><pre class='prettyprint'>gc this is a commit message</pre></p> <p><br />You can find this script and more <a href='https://github.com/erikojebo/configuration'>up at github</a> where I keep my configuration, such as bash scripts and dot files.</p>Sat, 04 Dec 2010 16:59:42 
+01002010-12-04T16:59:42+01:00294http://www.erikojebo.se/Code/Details/294webmaster@erikojebo.seGit-removing Files Marked as Deleted<p>If you work in an environment where not all file operations are made through git, you have probably been in the situation where files have been deleted by using just plain rm rather than git rm, so you have to manually do a git rm for each deleted file and then commit the deletion.</p> <p>To make this step a little bit easier, you can use the following command:</p> <p><br /><pre class='prettyprint'>git rm $(git ls-files -d)</pre></p> <p><br />It might be a good idea to either alias this command or put it in a shell script somewhere in your path, to make it easier to use.</p> <p><strong>Edit:</strong><br />There is actually a much better way to solve the problem where the deletions are not added, which is to add the --all flag to the git add:</p> <p><br /><pre class='prettyprint'>git add . --all</pre></p>Tue, 23 Nov 2010 08:46:01 +01002010-11-23T08:46:01+01:00293http://www.erikojebo.se/Code/Details/293webmaster@erikojebo.seMake Alt-Tab Useful by Disabling Aero Peek in Windows 7<p>The default behaviour of Alt-tab in Windows 7 is simply horrible to use. 
Fortunately you can easily disable Aero Peek, as it is called, to go back to a more traditional style of alt-tabbing.</p> <p>Go to &quot;Adjust the appearance and performance of windows&quot;, by typing it into the start menu search, and then simply uncheck &quot;Aero peek&quot;.</p>Mon, 22 Nov 2010 10:17:53 +01002010-11-22T10:17:53+01:00288http://www.erikojebo.se/Code/Details/288webmaster@erikojebo.seQuick Tip: Useful Shortcuts for your Everyday Windows Usage<p><strong>Windows in general:</strong><br />Windows + E: Open Windows explorer<br />Windows + R: Open Run dialog<br />Windows + D: Minimize/restore all (show desktop)<br />Ctrl + Shift + Esc: Open process explorer</p> <p><strong>Explorer:</strong><br />F10 or Alt: Show menu bar<br />F4: Move focus to address bar<br />F6: Move focus to next pane<br />Alt + Space: Open the system menu for the window (minimize, maximize, etc)<br />Ctrl + e or F3: Move focus to search bar<br />Shift + F10: Open right click menu for current item<br />Ctrl + w: Close the current window<br />F11: Full screen mode<br />Shift + F11: Exit full screen mode</p> <p><strong>Windows 7:</strong><br />Windows + 1 to 9: Pinned applications<br />Windows + Left: Dock current window left<br />Windows + Right: Dock current window right<br />Windows + Up: Maximize current window<br />Windows + Down: Restore current window<br />Windows + Shift + Left: Move to screen to the left<br />Windows + Shift + Right: Move to screen to the right</p>Thu, 04 Nov 2010 09:33:03 +01002010-11-04T09:33:03+01:00287http://www.erikojebo.se/Code/Details/287webmaster@erikojebo.seAssertion Classes: Reuse and Expressiveness instead of Test Class Inheritance Hierarchies<p>If you have several classes which have some aspect in common, you probably have test classes that have similar tests as well. These similarities often lead to duplication that needs to be removed. 
One way to remove this duplication is to extract a common super class for the test classes, but inheritance can be quite an inflexible way of sharing behaviour between classes. For example, what happens when you have a test class that shares behaviour with not only one other class, but two?</p> <p>Due to the limitations of inheritance, composition is often a better choice. So, how can you use composition to share functionality between test classes? Let&#39;s look at an example from the world of Windows Presentation Foundation and MVVM.</p> <p>Here is a typical view model class, which implements the interface INotifyPropertyChanged. If you are using MVVM you probably have quite a lot of classes implementing this interface.</p> <pre class='prettyprint'>public class ViewModel : INotifyPropertyChanged<br />{<br /> private string _someProperty; <p> public string SomeProperty<br /> {<br /> get { return _someProperty; }<br /> set<br /> {<br /> _someProperty = value;<br /> RaisePropertyChanged(&quot;SomeProperty&quot;);<br /> }<br /> }</p> <p> public void RaisePropertyChanged(string propertyName)<br /> {<br /> if (PropertyChanged != null)<br /> {<br /> PropertyChanged(this, new PropertyChangedEventArgs(propertyName));<br /> }<br /> }</p> <p> public event PropertyChangedEventHandler PropertyChanged;<br />}</pre></p> <p>Here is a simple test class, with a single test case that verifies that the property in the view model actually raises the PropertyChanged event when changed.</p> <pre class='prettyprint'>[TestFixture]<br />public class ViewModelTests<br />{<br /> private ViewModel _viewModel;<br /> private string _actualPropertyName;<br /> private int _eventsRaised; <p> [SetUp]<br /> public void SetUp()<br /> {<br /> _viewModel = new ViewModel();<br /> _eventsRaised = 0;<br /> _actualPropertyName = &quot;&quot;;<br /> }</p> <p> [Test]<br /> public void Setting_SomeProperty_raises_property_changed_event()<br /> {<br /> StartListeningForPropertyChangedEvent();</p> <p> 
_viewModel.SomeProperty = &quot;new value&quot;;</p> <p> AssertSinglePropertyChangedEventWasFiredFor(&quot;SomeProperty&quot;);<br /> }</p> <p> private void StartListeningForPropertyChangedEvent()<br /> {<br /> _viewModel.PropertyChanged += (s, e) =&gt;<br /> {<br /> _eventsRaised++;<br /> _actualPropertyName = e.PropertyName;<br /> };<br /> }</p> <p> private void AssertSinglePropertyChangedEventWasFiredFor(string expectedPropertyName)<br /> {<br /> Assert.AreEqual(1, _eventsRaised, &quot;number of property changed events raised&quot;);<br /> Assert.AreEqual(expectedPropertyName, _actualPropertyName);<br /> }<br />}</pre></p> <p>Now imagine that you write another view model class. You will probably need to write test cases for that class that are almost identical to the test case above. You could extract the helper methods to a super class, but as mentioned above, that is not an ideal solution.</p> <p>So, now to the good stuff. A simple way of cleaning up the test class above is to extract another class which handles everything that has to do with asserting that a property raises the property changed event. 
</p> <pre class='prettyprint'>public class ExpectPropertyChanged<br />{<br /> private string _actualPropertyName;<br /> private int _eventsRaised;<br /> private string _expectedPropertyName; <p> private ExpectPropertyChanged(INotifyPropertyChanged sender)<br /> {<br /> sender.PropertyChanged += (s, e) =&gt;<br /> {<br /> _eventsRaised++;<br /> _actualPropertyName = e.PropertyName;<br /> };<br /> }</p> <p> public static ExpectPropertyChanged On(INotifyPropertyChanged sender)<br /> {<br /> return new ExpectPropertyChanged(sender);<br /> }</p> <p> public ExpectPropertyChanged ForProperty(string expectedPropertyName)<br /> {<br /> _expectedPropertyName = expectedPropertyName;<br /> return this;<br /> }</p> <p> public void During(Action action)<br /> {<br /> action.Invoke();</p> <p> Assert.AreEqual(1, _eventsRaised, &quot;number of property changed events raised&quot;);<br /> Assert.AreEqual(_expectedPropertyName, _actualPropertyName);<br /> }<br />}</pre></p> <p>This removes the need for the helper methods in the test class and enables you to use the same assertion code in any other test class without having to tie the classes together via a static type relationship. Here is the test class after the refactoring:</p> <pre class='prettyprint'>[TestFixture]<br />public class ViewModelTests<br />{<br /> private ViewModel _viewModel; <p> [SetUp]<br /> public void SetUp()<br /> {<br /> _viewModel = new ViewModel();<br /> }</p> <p> [Test]<br /> public void Setting_SomeProperty_raises_property_changed_event()<br /> {<br /> ExpectPropertyChanged<br /> .On(_viewModel)<br /> .ForProperty(&quot;SomeProperty&quot;)<br /> .During(() =&gt; _viewModel.SomeProperty = &quot;new value&quot;);<br /> }<br />}</pre></p> <p>So, why would you bother with this stuff? First of all, it lets you reuse code much more easily than through inheritance. 
Apart from that it also simplifies your actual test classes since you can extract lots of boring helper methods into their own classes, which improves the signal to noise ratio significantly. Finally it also gives you a new level of expressiveness by naming the assertion class and introducing methods on the assertions which explain the meaning of each value used. This last point also solves the problem with composite assertions that take several parameters. By introducing an assertion class you can set each value using a method with a self documenting name, which makes it obvious what the value means, instead of having to try your best to guess what the third integer parameter does.<br /></p>Thu, 28 Oct 2010 22:38:32 +02002010-10-28T22:38:32+02:00286http://www.erikojebo.se/Code/Details/286webmaster@erikojebo.seColorizing Your Git Output<p>If the output from your git commands is not colorized, it is time to bring out the old git config.</p> <p>Either you configure git with the git config commands from a shell:</p> <pre class='prettyprint'>git config --global color.branch auto<br />git config --global color.diff auto<br />git config --global color.interactive auto<br />git config --global color.status auto</pre> <p>or you open up ~/.gitconfig in your favourite text editor and add the following section:</p> <pre class='prettyprint'>[color]<br /> branch = auto<br /> diff = auto<br /> interactive = auto<br /> status = auto</pre> <p>Given that you have a decent shell you can now enjoy the full glory of your git output.</p>Thu, 28 Oct 2010 20:46:45 +02002010-10-28T20:46:45+02:00285http://www.erikojebo.se/Code/Details/285webmaster@erikojebo.seSetting the Default Startup Directory for your Cygwin Bash<p>To change the path in which your bash shell is started you can edit your .bashrc file.<br />To do this, open cygwin and simply open the file <em>~/.bashrc</em> in your favourite text editor and add a cd command at the bottom.</p>Fri, 15 Oct 2010 08:13:20 
+02002010-10-15T08:13:20+02:00284http://www.erikojebo.se/Code/Details/284webmaster@erikojebo.seGetting Emacs to Behave under Cygwin<p>If you install emacs under cygwin, there is a risk that you won&#39;t be able to close emacs with the usual C-x C-c, since it is mapped to C-x C-g for some reason. To fix this, add an environment variable called <em>CYGWIN</em> with the value <em>binmode ntsec tty </em>.</p>Sat, 02 Oct 2010 17:53:48 +02002010-10-02T17:53:48+02:00281http://www.erikojebo.se/Code/Details/281webmaster@erikojebo.seSet Up Shared and Local Git Repository<p>In my current hobby project I&#39;m using git with a shared repository on an external hard drive, and a local repository on my primary disk.</p> <p>To set up an empty shared repository, use the following command:</p> <p><br /><pre class='prettyprint'>git init --bare --shared</pre></p> <p>This tells git to create a shared repository that is not supposed to have a working copy of the code, just to be pushed to and pulled from.</p> <p>You then clone the shared repository on your primary disk by executing the git clone command:</p> <p><br /><pre class='prettyprint'>git clone /path/to/shared/repository</pre></p> <p>You will get a warning that you have cloned an empty repository, but that isn&#39;t a problem.</p> <p>Now you are all set to add your first file and do your first commit in the new repository. For example:</p> <p><br /><pre class='prettyprint'>touch todo.txt<br />git add .<br />git commit -m &#39;Added todo list&#39;</pre></p> <p>You now have something that you can push to the shared repository. If you try to push your commits to the shared repository using </p> <pre class='prettyprint'>git push origin</pre> <p> you will get an error message saying <em>No refs in common and none specified; doing nothing; Perhaps you should specify a branch such as &#39;master&#39;</em>. This is because the remote repository is completely empty and has no branches. 
To push your commits you have to explicitly specify the name of the branch, i.e. <em>master</em>. </p> <pre class='prettyprint'>git push origin master</pre> <p><br />Now you have pushed your first commits to the remote repository and you can now push and pull like nobody&#39;s business.<br /></p>Thu, 09 Sep 2010 20:15:55 +02002010-09-09T20:15:55+02:00280http://www.erikojebo.se/Code/Details/280webmaster@erikojebo.seUse ReSharper to Quickly Generate GUIDs<p>Everyone who has used the GUID generation tool in Visual Studio knows that it is quite limited in the GUID formats it can generate. If you are a ReSharper user there is a much more enjoyable way to generate GUIDs. Anywhere in your code or in an XML file, type <strong>nguid</strong> and press tab to expand the snippet. That will generate a new GUID and let you choose between a number of formats.</p>Wed, 08 Sep 2010 21:31:49 +02002010-09-08T21:31:49+02:00278http://www.erikojebo.se/Code/Details/278webmaster@erikojebo.seGetting Current System Uptime under Windows<p>If you are trying to remember when you came to work, one trick is to check your system uptime:</p> <p>In a command prompt:<br />- Run the command &quot;net statistics server&quot; or &quot;net stats srv&quot;<br />- Check the first line of the output, which says &quot;Statistics since …&quot; which shows when you booted</p>Mon, 06 Sep 2010 09:29:04 +02002010-09-06T09:29:04+02:00274http://www.erikojebo.se/Code/Details/274webmaster@erikojebo.seRemoving Regions in Visual Studio<p>Some people are truly in love with the whole #region thing in Visual Studio. Personally I can&#39;t say I share that sentiment. Because of this I found myself wanting to quickly remove all regions in a solution. 
The first solution that came to mind was our old friend the regular expression.</p> <p>To remove all regions, open the Find &amp; replace dialog (Ctrl-H), choose the appropriate scope (current document, entire solution etc) and click the &quot;Use regular expressions&quot; checkbox. Enter the following regular expression in the Find field:</p> <pre class='prettyprint'>\#(end)*region.*$</pre> <p>Make sure the Replace field is empty, and hit the Replace All button. Tada!</p>Fri, 27 Aug 2010 15:09:41 +02002010-08-27T15:09:41+02:00256http://www.erikojebo.se/Code/Details/256webmaster@erikojebo.seSearch &amp; Replace with Visual Studio Regular Expressions<p>Regular expressions are an awesome tool, but unfortunately every language/environment has its own dialect, including Visual Studio.</p> <p>Visual Studio uses {} to indicate a group to capture. A captured group can later be referenced by using backslash and the match number (one based).</p> <p>As an example, let&#39;s say you want to do the following replacement:</p> <pre class='prettyprint'>// Original<br />var person = new Person(&quot;Frank&quot;); <p>// Expected result<br />var person = new Person { Name = &quot;Frank&quot; };</pre></p> <p>To achieve this you can use the following search and replace patterns in Visual Studio:</p> <p>Search pattern: <strong>var person = new Person\({:q}\);</strong><br />Replace pattern: <strong>var person = new Person { Name = \1 };</strong></p> <p>:q is a special Visual Studio regex symbol for matching a quoted string. Another custom VS symbol is :z which matches an integer.</p>Thu, 24 Jun 2010 10:27:46 +02002010-06-24T10:27:46+02:00247http://www.erikojebo.se/Code/Details/247webmaster@erikojebo.seGetting Started with FitNesse<p>If you haven&#39;t heard of FitNesse, it is Uncle Bob&#39;s acceptance testing framework which lets you specify test cases as wiki pages. 
These test cases then interact with fixture classes that call the system under test.</p> <p>The first step in getting started with FitNesse is to download it. You only need one file, FitNesse.jar, which can be found at <a href='http://fitnesse.org/FrontPage.FitNesseDevelopment.DownLoad'>the FitNesse wiki</a>.</p> <p>Download fitnesse.jar and put it in your project&#39;s lib directory. For the purpose of this blog post we will assume a directory structure that looks like this:</p> <pre class='prettyprint'>c:- code<br /> - hellofitnesse<br /> - bin<br /> - fitnesse<br /> - lib<br /> - src</pre> <p>So, now the lib folder contains fitnesse.jar.</p> <p>Since FitNesse is based on a wiki, it runs on its own web server. To start fitnesse go to the hellofitnesse directory and execute the following command:</p> <p><br /><pre class='prettyprint'>java -jar lib/fitnesse.jar -p 8080 -r fitnesse</pre></p> <p><br />This command starts fitnesse and tells it to listen on port 8080. It also tells it to unpack all the files that it needs in the folder fitnesse.</p> <p>You can now open up a browser and go to localhost:8080. You should then see the default FitNesse wiki.</p> <p>Edit the front page by clicking the edit link in the menu to the left and add the following lines:</p> <pre class='prettyprint'>!define TEST_SYSTEM {slim} <p>!path lib/fitnesse.jar<br />!path bin</p> <p>&gt;HelloFitnesse</pre></p> <p>The first line tells FitNesse to use slim to run the tests. The path statements tell FitNesse where to find fitnesse.jar and the .class files for your fixtures (change this path if your .class files will be generated into some other directory).</p> <p>The last line creates a link to a child page called HelloFitnesse. When you save the page, a little question mark will appear after HelloFitnesse. Click the question mark to create the page.</p> <p>This page is now just an ordinary wiki page. 
To make the page into a test suite page click the properties link in the menu, and select Suite.</p> <p>Edit the HelloFitnesse page and add the following row:</p> <pre class='prettyprint'>&gt;MyFirstTest</pre> <p>Create the page and change its properties to make it into a test page.</p> <p>Edit the MyFirstTest page and add the following code to create a simple test:</p> <pre class='prettyprint'>|import |<br />|fixtures|<br /> <br />!|AddFixture |<br />|a |b |sum? |<br />|1 |2 |3 |<br />|2 |2 |3 |</pre> <p>The first two lines define a table with all the packages to import. The second table defines the actual test. The first row specifies the name of the fixture class to run. The columns a and b represent state that should be set in the fixture, and the column sum? represents a method call which returns the result that should be verified.</p> <p>The values a and b are set by calling the methods setA and setB with the corresponding values. When both a and b are set the method sum() will be called and the return value will be compared to the expected value.</p> <p>You can now run the test by clicking the Test link in the menu. However, since the actual test fixture has not been written yet you will get an exception that looks something like this:</p> <p><em>Could not invoke constructor for AddFixture[0]</em></p> <p>So, the next step is to write the fixture class.</p> <pre class='prettyprint'>package fixtures; <p>public class AddFixture {<br /> private int _a;<br /> private int _b;</p> <p> public void setA(int a) {<br /> _a = a;<br /> }</p> <p> public void setB(int b) {<br /> _b = b;<br /> }</p> <p> public int sum() {<br /> return _a + _b;<br /> }<br />}</pre></p> <p>Compile the fixture class and make sure the AddFixture.class file is actually located in the bin/fixtures/ folder (if bin is your output folder, specified in the !path directive earlier). 
You should now be able to run the test and get the proper response: one row should succeed and one should fail.</p>Fri, 16 Apr 2010 22:05:54 +02002010-04-16T22:05:54+02:00240http://www.erikojebo.se/Code/Details/240webmaster@erikojebo.seFixing Default Indentation in Resharper<p>The default indentation rules for Resharper are quite ugly. Both object/collection initializers and anonymous methods are indented way too much. Fortunately this can be fixed.</p> <p>To fix the indentation of anonymous methods, go to Resharper-&gt;Options-&gt;Braces Layout and choose &quot;BSD style&quot; for &quot;Anonymous method declaration&quot;. Then go to Other and deselect the option &quot;Indent anonymous method body&quot;.</p> <p>Fixing the object/collection initializer indentation is quite similar. Go to Resharper-&gt;Options-&gt;Braces Layout and choose &quot;BSD style&quot; for &quot;Array and object initializer&quot;. Then go to Other and set &quot;Continuous line indent multiplier&quot; to 1.</p>Wed, 31 Mar 2010 09:49:21 +02002010-03-31T09:49:21+02:00239http://www.erikojebo.se/Code/Details/239webmaster@erikojebo.seThe Importance of Nomenclature<p>Naming is a crucial part of programming, and is probably one of the most challenging parts of the craft. The Behaviour Driven Development movement currently does its best to bring attention to the fact that terminology and naming not only affect the way we read and understand code, they also affect the way we approach problems, and thereby the solutions that we come up with.</p> <p>An example of how terminology influences the code you write is the naming of the attribute which specifies that a method is a test method. NUnit uses the traditional jUnit style by using the [Test] attribute. However, xUnit.NET has chosen to call their attribute [Fact]. 
This simple change is quite effective in pointing you in the right direction when you sit down to write your unit test.</p> <p>The [Test] attribute is quite weak since it does not say much, except that the method is to be executed by the test runner. For example, a common style of writing unit tests is to do something like this:</p> <pre class='prettyprint'>[Test]<br />public void TestPush()<br />{<br /> // Create a stack instance<br /> // Do some stuff with it <br /> // Write a bunch of asserts<br />}</pre> <p>The [Fact] attribute on the other hand sets quite a firm expectation on the test method. If you change the [Test] attribute above to a [Fact] attribute it makes no sense. TestPush is certainly not a fact. You are pushed to write tests with names that actually state some fact about the code under test. To be able to write a fact about the code you are testing you also have to choose a narrower scope, probably with a single logical assert.</p> <pre class='prettyprint'>[Fact]<br />public void GivenEmptyStack_WhenPushing_ThenSizeIsOne()<br />{<br /> Stack stack = new Stack();<br /> stack.Push(0);<br /> Assert.Equal(1, stack.Size);<br />}</pre>Mon, 29 Mar 2010 20:33:18 +02002010-03-29T20:33:18+02:00238http://www.erikojebo.se/Code/Details/238webmaster@erikojebo.seImplementation of a Micro DSL<p>I&#39;ve previously touched on the topic of using extension methods to clean up the syntax of your unit tests. 
That approach can give you very expressive and readable tests, for example:</p> <pre class='prettyprint'>[Test]<br />public void Validate_RequestWithValidPath_Ok()<br />{<br /> &quot;GET /valid/path.html HTTP/1.1&quot;<br /> .ShouldValidateTo(RequestValidationStatus.Ok);<br />}</pre> <p>The drawback of using extension methods shows up when you want optional state setup methods that you can chain together to form an expression.</p> <pre class='prettyprint'>[Test]<br />public void Validate_RequestWithValidPath_Ok()<br />{<br /> &quot;GET /valid/path.html HTTP/1.1&quot;<br /> .UsingPathChecker(new AlwaysValidPathChecker())<br /> .ShouldValidateTo(RequestValidationStatus.Ok);<br />}</pre> <p>If the UsingPathChecker method in the example above needs to be optional you have to supply two extension methods to the string class, UsingPathChecker and ShouldValidateTo. You also need a class that is used as a return value of the UsingPathChecker method call, which holds the state and also contains a ShouldValidateTo method.</p> <p>As you can see, it can quickly get a bit messy. To get rid of some of the mess, you can use a simple factory method with a well-chosen name.</p> <pre class='prettyprint'>[Test]<br />public void Validate_RequestWithValidPath_Ok()<br />{<br /> TheRequest(&quot;GET /valid/path.html HTTP/1.1&quot;)<br /> .UsingPathChecker(new AlwaysValidPathChecker())<br /> .ShouldValidateTo(RequestValidationStatus.Ok);<br />}</pre> <p>The method TheRequest in this example simply returns a new instance of the helper class which holds the state, and thereby eliminates the need for the extension methods. 
You now only have one place where you can put all the stuff you need to create your little DSL.</p> <p>Here is an example of what the helper class could look like:</p> <pre class='prettyprint'>class HttpRequestValidatorSpecsIntermediary<br />{<br /> string _input;<br /> private HttpRequestValidator _validator; <p> internal HttpRequestValidatorSpecsIntermediary(string input)<br /> {<br /> _input = input;<br /> _validator = new HttpRequestValidator();<br /> }</p> <p> internal HttpRequestValidatorSpecsIntermediary UsingPathChecker(IPathChecker pathChecker)<br /> {<br /> _validator = new HttpRequestValidator(pathChecker);</p> <p> return this;<br /> }</p> <p> internal void ShouldValidateTo(RequestValidationStatus expectedStatus)<br /> {<br /> var actualValidationStatus = _validator.Validate(new HttpRequest(_input));</p> <p> Assert.AreEqual(expectedStatus, actualValidationStatus);<br /> }<br />}</pre></p>Mon, 29 Mar 2010 19:56:21 +02002010-03-29T19:56:21+02:00236http://www.erikojebo.se/Code/Details/236webmaster@erikojebo.seLessons Learned from Uncle Bob<p>Earlier this week I spent three days with Robert C. Martin, also known as Uncle Bob. He was in Stockholm giving a course on Advanced TDD, which I had the privilege to attend. Hearing Uncle Bob talk about TDD for three whole days led to quite a few thoughts and insights. This post is an attempt to summarize the main points I took away from his course.</p> <p><br /><strong>Don&#39;t try to convince others, convince yourself</strong></p> <p>As a test driven developer on a team with non-test driven colleagues you have probably asked yourself the question &quot;how do I convince the others to take up TDD?&quot;. The answer to this question is that you probably can&#39;t convince anyone to start doing TDD. That decision has to come from each person individually. What you can do is to make sure that you are really convinced of the benefits of TDD, yourself. If you are, it will shine through. 
As time passes the quality and effectiveness of your work will hopefully speak for itself, and help inspire the other developers.</p> <p>By focusing on following test driven practices yourself and being open to sharing your point of view when anyone shows interest you can slowly but surely spread the knowledge.</p> <p><br /><strong>Tests are truly first class citizens</strong></p> <p>Tests are important. Very important. The test code deserves the same tender love and care as your production code. For some reason test code is generally not held to the same standards as the production code. You should strive to keep your test code just as DRY and expressive as all other code. Make sure not to repeat tedious setup code or hard-to-read asserts. Use extract method if you can, and split the tests to make sure they are only testing one thing.</p> <p>The test is more important than the prettiness of your public interfaces. The consequence of this is that it is better to add test-specific methods/properties to your objects and to <br />increase the visibility of private/protected methods than to let your tests suffer. However, every time you are tempted to promote a private method to public for testability you should stop and think if it is not really an inner class that is trying to escape.</p> <p>For some reason, test code is generally not shipped with the production code. Why not use the fact that you have an extremely powerful diagnostic tool available and ship the tests when you release, together with some lightweight test runner software? By doing that you can drastically improve your ability to diagnose problems that occur on the systems on which your software is being deployed.</p> <p><br /><strong>Make sure you can trust your test suite</strong></p> <p>Once you have worked on a code base that is covered by a test suite you trust, no matter the size, you tend to get hooked on the feeling of freedom to refactor and change the code. 
When you reach this point of trust you can start reaping some of the most powerful benefits of a test driven code base. The flexibility of the code allows you to do whatever you please and get immediate feedback. Your QA process is drastically shortened, which makes frequent releases and deployment a breeze.</p> <p>Achieving this kind of trust for your test suite is not something you do easily. It takes hard work and discipline. A good way to keep the trust level high is to follow the three laws of TDD (see &quot;True TDD is hard, get used to it&quot; below).</p> <p><br /><strong>Don&#39;t ask for permission, just do it</strong></p> <p>A colleague of mine has the philosophy that it is easier to ask for forgiveness than for permission. This is especially true when it comes to TDD. If you start asking management and colleagues if you should do TDD or not, in an organization that is not that progressive, you will probably hear the usual anti-TDD arguments. If you don&#39;t want to take the fight, the easiest way to resolve this issue is to not ask in the first place.</p> <p>Most likely, neither colleagues nor management will explicitly tell you NOT to do TDD if you just do it as a natural part of your development process. If your team/management is extreme enough to not want test code in the code base, or something crazy like that, just keep the test code locally on your machine instead of checking it in, or simply throw it away after letting it drive your production code.</p> <p><br /><strong>Tests are both documentation and functional spec</strong></p> <p>By letting acceptance tests and unit tests drive the implementation of requirements, as well as the modifications made due to changed requirements, your tests always reflect the current functional spec. Your acceptance tests provide an executable functional specification. If you are using a tool such as FitNesse that specification will be easily understood by the customer. 
In the best of worlds, this specification is not only easily understood by the customer, the customer is also able to create and update specs. Even if that is not the case, the acceptance tests provide a powerful basis from which customer and developer can cooperate to evolve the spec.</p> <p>The unit tests provide an extensive documentation of the actual implementation of the requirements. This documentation will always be up to date, which is not usually the case with traditional documentation. Code examples very efficiently describe the usage and behaviour of classes/methods etc, much more efficiently than written text. Unit tests are a very useful mixture of executable specification and usage examples. </p> <p><br /><strong>True TDD is hard, get used to it</strong></p> <p>To get all the benefits of TDD, you always have to push yourself to follow the three rules of TDD:</p> <p>1. You are not allowed to write any production code unless it is to make a failing unit test pass.<br />2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.<br />3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test. </p> <p>This is hard. Really hard. Testing legacy code is hard. Testing the GUI is hard. Convincing colleagues is hard. Keeping your tests fast is hard.</p> <p>There are no simple solutions which magically solve these hard issues. They are hard because they are hard. The only thing you can do is to do the best that you can, and do it over and over again.<br /></p>Fri, 19 Mar 2010 21:05:53 +01002010-03-19T21:05:53+01:00232http://www.erikojebo.se/Code/Details/232webmaster@erikojebo.seEmacsify Your Windows Keyboard Navigation<p>If you are an Emacs user you might miss the Emacs navigation keybindings while working in Windows. The Emacs navigation keybindings let you navigate through text without moving your hands from the touch typing position. 
This can be quite nice, especially when working on a laptop which has home/end and the arrows in funky locations on the keyboard.</p> <p>Since Emacs shortcuts are extremely Ctrl-heavy, many Emacs users rebind their caps lock key to be another ctrl key. To emulate the Emacs bindings, while at the same time keeping all the default Windows keybindings, I created a little AutoHotKey script that maps CapsLock+n/p etc to up arrow/down arrow etc.</p> <p><a href='http://www.autohotkey.com/'>AutoHotKey</a> is a small but extremely powerful application for doing stuff with keybindings in Windows. If you have not checked it out, I strongly suggest that you do. It can do some really useful things.</p> <p>The tiny AutoHotKey script I wrote goes as follows:</p> <pre class='prettyprint'>CapsLock &amp; n::Send {Down}<br />CapsLock &amp; p::Send {Up}<br />CapsLock &amp; e::Send {End}<br />CapsLock &amp; a::Send {Home}<br />CapsLock &amp; f::Send {Right}<br />CapsLock &amp; b::Send {Left}</pre> <p>Now you can navigate easily in any Windows application, even explorer, while still keeping your system pair-programming friendly by keeping all standard keybindings unchanged.</p>Sun, 07 Mar 2010 22:17:12 +01002010-03-07T22:17:12+01:00210http://www.erikojebo.se/Code/Details/210webmaster@erikojebo.seQuick Tip: Kill Multiple Processes in Windows<p>Linux folks have probably used the <em>kill</em> command from time to time. 
The Windows equivalent is <em><a href='http://technet.microsoft.com/en-us/library/bb491009.aspx'>taskkill</a></em>.</p> <p>To kill all instances of the WINWORD process, use the following command:</p> <pre class='prettyprint'>taskkill /im WINWORD* /f</pre> <p>The /im switch specifies the image name, and the /f switch tells taskkill to forcefully terminate the processes.</p>Fri, 29 Jan 2010 12:55:05 +01002010-01-29T12:55:05+01:00209http://www.erikojebo.se/Code/Details/209webmaster@erikojebo.seCan Your Site be Used by a Visually Impaired User?<p>One of the benefits of using semantic HTML is that your site becomes more accessible to users with visual disabilities. This is because screen reader applications make some assumptions about the content of your site depending on what tags you use in your markup. These assumptions are based on the semantic meaning of the HTML tags, which is why semantic markup fits screen readers like a glove.</p> <p>Writing semantic markup is not always that easy, though. The choice of how to mark up the content of your site can be difficult. To check the accessibility of your site and make these choices easier you can run your site through a screen reader and see which markup makes the most sense.</p> <p>An excellent way to see how a screen reader would present the contents of your site is to use the Firefox extension <a href='http://www.standards-schmandards.com/projects/fangs/faq/'>Fangs</a>.
This extension can generate the text that would be read by the screen reader <a href='http://en.wikipedia.org/wiki/JAWS_%28screen_reader%29'>Jaws</a>.</p> <p>When you have installed Fangs, all you have to do is right click and choose &quot;View Fangs&quot;, and tada, there is the screen reader output in plain text.</p>Mon, 21 Dec 2009 15:58:19 +01002009-12-21T15:58:19+01:00208http://www.erikojebo.se/Code/Details/208webmaster@erikojebo.seUsing Extension Methods to Refactor Your Tests<p>In a recent <a href='http://www.iamnotmyself.com/2009/10/23/TDDKataCalculatorDemonstration.aspx'>Katacast</a>, Bobby Johnsson gave an excellent example of how C# extension methods can be used to create simple and readable test cases. He was performing the <a href='http://katas.softwarecraftsmanship.org/?p=80'>StringCalculator</a> kata, in which you write a calculator class that can calculate the sum of the integers in a given input string, for example &quot;1,3,5&quot;.</p> <p>Without extension methods an ordinary test case for this kata might look something like this:</p> <pre class='prettyprint'>[Test]<br />public void Add_GivenTwoSingleDigitNumbers_ShouldReturnTheSum()<br />{<br /> var calculator = new StringCalculator();<br /> int sum = calculator.Add(&quot;1,2&quot;); <p> Assert.AreEqual(3, sum);<br />}</pre></p> <p>This is a quite ordinary test, and there is no problem with the test itself. However, if you have 15 other tests with the same structure there is room for some refactoring.
One approach to refactoring the test could be to simply create a helper method, like so:</p> <pre class='prettyprint'>private void VerifySum(string input, int expectedSum)<br />{<br /> var calculator = new StringCalculator();<br /> int actualSum = calculator.Add(input); <p> Assert.AreEqual(expectedSum, actualSum);<br />}</pre></p> <p>This would give a test case that looks like this:</p> <pre class='prettyprint'>[Test]<br />public void Add_GivenTwoSingleDigitNumbers_ShouldReturnTheSum()<br />{<br /> VerifySum(&quot;1,2&quot;, 3);<br />}</pre> <p>If you instead use the extension method refactoring you could create an extension method that looks something like this:</p> <pre class='prettyprint'>public static class StringExtensions<br />{<br /> public static void ShouldAddUpTo(this string input, int expectedSum)<br /> {<br /> var calculator = new StringCalculator(); <p> int actualSum = calculator.Add(input);</p> <p> Assert.AreEqual(expectedSum, actualSum);<br /> }<br />}</pre></p> <p>This approach gives a test that looks like this:</p> <pre class='prettyprint'>[Test]<br />public void Add_GivenTwoSingleDigitNumbers_ShouldReturnTheSum()<br />{<br /> &quot;1,2&quot;.ShouldAddUpTo(3);<br />}</pre> <p>I must say that the extension method approach gives a much more fluent and readable syntax, with a very Ruby-esque feeling to it. The downside of using extension methods is that they can be confusing for the reader, since you must have included the namespace of the extensions for them to show up, and you cannot resolve those namespaces as easily as you could with standard method calls.
In my opinion, the example above is an excellent use of extension methods when the extension method is located in the same place as the tests, since that makes it obvious what is going on.</p> <p>Extension methods are truly an awesome feature of C#, when used correctly.</p>Fri, 11 Dec 2009 11:01:50 +01002009-12-11T11:01:50+01:00207http://www.erikojebo.se/Code/Details/207webmaster@erikojebo.seVerifying IEnumerable<T> HasMany Mapping with Fluent NHibernate<p>Fluent NHibernate has built-in functionality to make it very simple to verify that your entities are mapped correctly. To test the mappings of an entity you can use the PersistenceSpecification class:</p> <pre class='prettyprint'>[Test]<br />public void CanMapPost()<br />{<br /> using (ISession session = SessionFactorySingleton.OpenSession())<br /> {<br /> new PersistenceSpecification&lt;Post&gt;(session)<br /> .CheckProperty(p =&gt; p.Name, &quot;Post Name&quot;)<br /> .CheckProperty(p =&gt; p.DateTime, new DateTime(2009, 12, 6))<br /> .CheckProperty(p =&gt; p.Comment, &quot;Comment goes here&quot;)<br /> .CheckProperty(p =&gt; p.Amount, 250.90)<br /> .VerifyTheMappings();<br /> }<br />}</pre> <p>This creates an instance of the Post class, saves it to the database, retrieves it again and verifies that its properties have the correct values.</p> <p>The PersistenceSpecification class also includes functionality to verify <em>HasMany</em> and <em>References</em> mappings. For example, you could verify a HasMany mapping like this:</p> <pre class='prettyprint'>new PersistenceSpecification&lt;Post&gt;(session)<br /> .CheckList(p =&gt; p.Tags, tags)</pre> <p>This works well if you have a setter for your collection. However, if you expose the collection as an IEnumerable&lt;T&gt; and use a private backing field, for example by mapping it as Access.CamelCaseField, it will not work.
This is because Fluent NHibernate does not know how to add the items to the collection.</p> <p>This issue has been fixed in later releases of Fluent NHibernate. Now you can verify your IEnumerable&lt;T&gt; property mapping using the CheckEnumerable method of the PersistenceSpecification class. A complete verification of the Post class above could look like this:</p> <pre class='prettyprint'>[Test]<br />public void CanMapPost()<br />{<br /> using (ISession session = SessionFactorySingleton.OpenSession())<br /> {<br /> var tags = new List&lt;Tag&gt;()<br /> {<br /> new Tag(&quot;tag 1&quot;),<br /> new Tag(&quot;tag 2&quot;)<br /> }; <p> new PersistenceSpecification&lt;Post&gt;(session)<br /> .CheckProperty(p =&gt; p.Name, &quot;Post Name&quot;)<br /> .CheckProperty(p =&gt; p.DateTime, new DateTime(2009, 12, 6))<br /> .CheckProperty(p =&gt; p.Comment, &quot;Comment goes here&quot;)<br /> .CheckProperty(p =&gt; p.Amount, 250.90)<br /> .CheckEnumerable(p =&gt; p.Tags, (p, t) =&gt; p.AddTag(t), tags)<br /> .VerifyTheMappings();<br /> }<br />}</pre></p>Mon, 07 Dec 2009 21:15:59 +01002009-12-07T21:15:59+01:00205http://www.erikojebo.se/Code/Details/205webmaster@erikojebo.seFree .NET Profiler<p>If you don&#39;t have access to a commercial profiler, such as JetBrains&#39; dotTrace or ANTS Profiler, take a look at <a href='http://www.ijw.co.nz/index.htm'>IJW Profiler for .NET &amp; Java</a>. It is a basic but quite nice profiler; it is free, and the source code is available from the link above.</p>Fri, 06 Nov 2009 20:04:47 +01002009-11-06T20:04:47+01:00202http://www.erikojebo.se/Code/Details/202webmaster@erikojebo.seStarting WPF Application with Window from another Assembly<p>When you create a WPF application you get the standard App.xaml and Window.xaml files. The App.xaml file has the StartupUri attribute set to Window.xaml.
This is all well and fine, and if you rename Window.xaml or create another window that you want to use as the startup window you can simply modify the value of the StartupUri attribute.</p> <p>If you want to use a window defined in another assembly as the startup window it becomes a little trickier. The simplest way to accomplish this is to override the Application.OnStartup method. In your override you can instantiate the window you want to show and then simply call Show() on it. For example:</p> <pre class='prettyprint'><br />public partial class App : Application<br />{<br /> protected override void OnStartup(StartupEventArgs e)<br /> {<br /> base.OnStartup(e);<br /><br /> // Create an instance of the window that is located in another assembly<br /> var customWindow = new CustomWindow(); <p> customWindow.Show();<br /> }<br />}<br /></pre></p> <p>You could also create your own main method, and manually create and run your application.</p> <p>The first step in creating your own main method is to change the Build Action property of your App.xaml file from ApplicationDefinition to Page, which stops Visual Studio from creating a main method for you. The second step is to add a main method to your App class:</p> <pre class='prettyprint'><br />public partial class App : Application<br />{<br /> [STAThread]<br /> public static void Main()<br /> {<br /> Application app = new Application(); <p> // Create an instance of the window that is located in another assembly<br /> var customWindow = new CustomWindow();</p> <p> app.Run(customWindow);<br /> }<br />}<br /></pre></p> <p>Now you are all set. Compile, run and enjoy!</p>Sun, 18 Oct 2009 19:26:05 +02002009-10-18T19:26:05+02:00198http://www.erikojebo.se/Code/Details/198webmaster@erikojebo.seAvoiding Flicker at Page Load when Using the Ajax Control Toolkit ModalPopup Extender Control<p>If you are using the ModalPopup extender control from the Ajax Control Toolkit you might have noticed that the popup flickers when the page is loaded.
This is because there is a slight delay between when the page is rendered and when the JavaScript of the extender control hides the popup.</p> <p>To get around this you need to set the display style of the popup element. It seems like you have to set the style directly at the control, and not from an external style sheet.</p> <p>To set the display style to none for your popup element, add the following attribute:</p> <pre class='prettyprint'>style=&quot;display: none&quot;</pre>Thu, 15 Oct 2009 20:35:28 +02002009-10-15T20:35:28+02:00177http://www.erikojebo.se/Code/Details/177webmaster@erikojebo.seDisplaying Modal Dialogs with WPF<p>In WPF you can show a modal dialog with the ShowDialog method, just as in WinForms. However, it does not behave quite right. For example, if you display a modal dialog, then switch to another application, and switch back to your application, the main form will be shown and not the modal dialog.</p> <p>To get the expected behaviour you have to set the Owner property of the dialog before calling ShowDialog():</p> <pre class='prettyprint'><br />var dialog = new FooDialog();<br />dialog.Owner = this;<br />dialog.ShowDialog();<br /></pre> <p><br />This makes the modal dialog the topmost window of the application.</p>Thu, 13 Aug 2009 12:54:41 +02002009-08-13T12:54:41+02:00176http://www.erikojebo.se/Code/Details/176webmaster@erikojebo.seNHibernate: Using Paging with Join Queries<p>When you write queries that use joins in NHibernate, most of the time the result you get back is not unique at the root entity level. This is usually not a problem since you can stick a DistinctRootEntityResultTransformer on the query and get a nice unique result set.</p> <p>However, if you want to page the result based on the root entities there is a problem. Since the DistinctRootEntityResultTransformer operates on the result set that comes back from the database, while the paging modifies the database query, the paging is applied before the data is filtered to be unique.
This leads to a result that you probably don&#39;t want.</p> <p>To combine paging with joined queries you can split the query into two separate queries. The first query fetches the root entities based on some criteria and pages the result. The second query performs the join for the entities returned from the previous query, without caring about paging or uniqueness. Since NHibernate makes sure an entity is mapped to exactly one instance in a session, the result from the second query loads data into the instances returned from the first query. </p> <p>Here is an example:</p> <pre class='prettyprint'>// Assuming two classes: Post and Comment. A post has many comments. <p>public IEnumerable&lt;Post&gt; GetPostsByPageWithCommentsLoaded(int pageIndex, int pageSize)<br />{<br /> using (ISession session = SessionFactory.OpenSession())<br /> {<br /> // Load the posts using paging<br /> var posts = session.CreateCriteria(typeof(Post))<br /> .AddOrder(Order.Desc(&quot;DateTime&quot;))<br /> .SetFirstResult(pageIndex * pageSize)<br /> .SetMaxResults(pageSize)<br /> .List&lt;Post&gt;();</p> <p> // Eager load the comments for all the posts returned in the previous query<br /> // Restrictions.In() is used to avoid fetching unnecessary data<br /> // temporaryPosts is never used.
The point of the query is to load data into<br /> // the instances returned from the previous query<br /> var temporaryPosts = session.CreateCriteria(typeof(Post))<br /> .SetFetchMode(&quot;Comments&quot;, FetchMode.Eager)<br /> .Add(Restrictions.In(&quot;Id&quot;, posts.Select(p =&gt; p.Id).ToArray()))<br /> .List&lt;Post&gt;();</p> <p> return posts;<br /> }<br />}</pre></p>Sun, 26 Jul 2009 17:45:34 +02002009-07-26T17:45:34+02:00172http://www.erikojebo.se/Code/Details/172webmaster@erikojebo.seThe First Lesson of Scrum<p>The other day I learned one of the most important Scrum techniques: Always take a new sticky by gently pulling it off the pad from left to right, <em>not</em> by yanking it upwards as you would normally do. This ensures maximum stickiness, so that the notes last the whole sprint.</p>Wed, 17 Jun 2009 22:12:30 +02002009-06-17T22:12:30+02:00167http://www.erikojebo.se/Code/Details/167webmaster@erikojebo.seC#: Creating a Cursor from a Resource<p>A simple way to use custom cursor icons in .NET applications is to include the .cur file in the project resources (project properties -&gt; resources -&gt; add resource from file).</p> <p>A cursor can then be created like so:</p> <p><br /><pre class='prettyprint'><br />Cursor cursor = new Cursor(new System.IO.MemoryStream(<br /> Properties.Resources.MyCursorName));<br /></pre></p>Tue, 02 Jun 2009 12:28:16 +02002009-06-02T12:28:16+02:00165http://www.erikojebo.se/Code/Details/165webmaster@erikojebo.seTests as Documentation<p>A couple of weeks ago there was an internal seminar at work about Test-Driven Development. The seminar was held by one of the senior devs who has been doing TDD for a few years. As a part of the seminar he presented a few levels of TDD practice, corresponding to different &quot;aha&quot; moments.</p> <p>One of the &quot;aha&quot; moments was the realization that tests are documentation. When I heard this I thought for a while about whether I had had this &quot;aha&quot; moment yet or not.
I could see how tests could be considered documentation, but that was not how I primarily saw it.</p> <p>However, about a week ago I actually had that &quot;aha&quot; moment, and then once again today. Last week I had to go through some complex import/export code that I had worked with maybe a month earlier. Since I had worked on the code not that long ago I thought that I would remember how the code was supposed to work. That was not the case. I found myself wondering if a specific behaviour was by design or if it was a bug. After a few seconds of thinking I realized that the tests could tell me (&quot;aha!&quot;). A quick look in the test fixture and I knew that the code worked according to spec.</p> <p>Today I was setting up a new hobby project, and I was rewriting some old configuration code for Fluent-NHibernate, since that project is changing quite rapidly. I was writing the database configuration and tried to find the correct way to specify &quot;integrated security&quot; as login for a MS SQL Server connection. Intellisense showed that there was a &quot;TrustedConnection()&quot; method of the configuration object, but it had no XML comments.</p> <p>A quick Google search turned up the Fluent NHibernate SVN repository, so I thought to myself that I might as well take a quick look in the source. The search function at Google Code found the unit test for TrustedConnection().</p> <p>The test looks as follows:</p> <pre class='prettyprint'><br />[Test]<br />public void ConnectionString_for_trustedConnection_is_added_to_the_configuration()<br />{<br /> MsSqlConfiguration.MsSql2005<br /> .ConnectionString(c =&gt; c<br /> .Server(&quot;db-srv&quot;)<br /> .Database(&quot;tables&quot;)<br /> .TrustedConnection())<br /> .ToProperties().ShouldContain(&quot;connection.connection_string&quot; ,<br /> &quot;Data Source=db-srv;Initial Catalog=tables;Integrated Security=True&quot;);<br />}</pre> <p>Integrated security it is then. Unit tests to the rescue!
(&quot;aha!&quot;)</p> <p>However, when it comes to public libraries and APIs like Fluent NHibernate I really think that there is no excuse for not writing XML comments. Documentation in Intellisense is such a productivity boost that it is a must-have for all APIs.</p> <p>From now on, I can say without a doubt that I believe that unit tests are documentation.</p>Sun, 17 May 2009 21:22:27 +02002009-05-17T21:22:27+02:00150http://www.erikojebo.se/Code/Details/150webmaster@erikojebo.seProgrammatically Add Conditional Formatting Formula to a Cell in Excel<p>The following snippet demonstrates how to add a conditional formatting rule to a cell. The rule specifies that the cell color is set to red if the contents of the cell are longer than 3 characters.</p> <p>The snippet assumes an existing worksheet instance.</p> <pre class='prettyprint'>var range = (Excel.Range)worksheet.Cells[rowNo + 2, colNo + 1]; <p>Excel.FormatCondition condition = (Excel.FormatCondition)range.FormatConditions.Add(<br /> Microsoft.Office.Interop.Excel.XlFormatConditionType.xlExpression, <br /> Type.Missing, &quot;=LEN(R[0]C[0])&gt;3&quot;, Type.Missing, Type.Missing, <br /> Type.Missing, Type.Missing, Type.Missing);</p> <p>condition.Interior.ColorIndex = 3; // Red<br /></pre></p>Wed, 15 Apr 2009 14:30:36 +02002009-04-15T14:30:36+02:00148http://www.erikojebo.se/Code/Details/148webmaster@erikojebo.se"Could not find installable ISAM" when Reading Excel Document Programmatically with Microsoft Jet<p>This generic error message is often an indication of a syntax error in the connection string.
In my case the error occurred when I tried to add a second Extended Property, like so:</p> <pre class='prettyprint'>var connection = new System.Data.OleDb.OleDbConnection(<br /> &quot;Provider=Microsoft.Jet.OLEDB.4.0;&quot; +<br /> &quot;Data Source=&quot; + path + &quot;;&quot; +<br /> &quot;Extended Properties=Excel 8.0;HDR=No;&quot;);</pre> <p>The problem turned out to be that the multiple extended properties were not quoted. A single extended property does not need quoting, but multiple do.</p> <p>The fix was then simply to add escaped quotes:</p> <pre class='prettyprint'>var connection = new System.Data.OleDb.OleDbConnection(<br /> &quot;Provider=Microsoft.Jet.OLEDB.4.0;&quot; +<br /> &quot;Data Source=&quot; + path + &quot;;&quot; +<br /> &quot;Extended Properties=\&quot;Excel 8.0;HDR=No;\&quot;&quot;);</pre>Tue, 07 Apr 2009 15:32:17 +02002009-04-07T15:32:17+02:00141http://www.erikojebo.se/Code/Details/141webmaster@erikojebo.seSQL Server Express Profiler<p>At the moment I am comparing a few different object/relational mapping tools (ORMs), namely Linq to SQL, Entity Framework, NHibernate and MyGeneration Doodads. Both Linq to SQL and NHibernate have excellent features for writing all generated SQL statements to the console. Unfortunately it is not quite as easy in the others.<br /><br />Today I needed to debug a Subsonic query but could not find any information about how to output the generated SQL to the console. Finally I gave up and thought to myself that it would have been really great if Microsoft had included a SQL profiler in the SQL Server Express Editions. It then occurred to me that I had never really tried to find another SQL profiler, so a quick Google search found <a href='http://sqlprofiler.googlepages.com/'>SQL Server 2005/2008 Express Profiler</a> which is an open source profiler for MS SQL Server Express, as you probably guessed.<br /><br />The project home page is quite spartan and the GUI isn&#39;t the best ever, but it works.
First of all you have to select which events you want to see in the trace. When that is set up you just press play, do your SQL magic and watch the queries appear in the trace.<br /><br />So, if you are using SQL Server Express it is absolutely worth a try.</p>Fri, 20 Feb 2009 15:17:10 +01002009-02-20T15:17:10+01:00122http://www.erikojebo.se/Code/Details/122webmaster@erikojebo.seCompiling Fluent-NHibernate for NHibernate 2.1 without Using Rake<p>Currently Fluent-NHibernate references NHibernate version 2.0.0.4 by default. At the Fluent-NHibernate page the suggested way of compiling Fluent-NHibernate for NHibernate 2.1 is as follows:<br /><br /><em>To enable NH 2.1 support, run the rake task (rake use_nhib_21) and then build FluentNHibernate.Versioning.NH21. To go back to NH 2.0.1, run the rake task (rake use_nhib_201) and build FluentNHibernate.Versioning.NH20</em><br /><br />To do this you obviously need to have rake installed. As a Windows user you can get rake by using the <a href='http://rubyinstaller.rubyforge.org/wiki/wiki.pl'>Ruby installer for Windows</a>.<br /><br />However, if you do not want to install rake, you can still change the references by hand and compile new Fluent-NHibernate binaries.<br /><br />First of all, get the latest source code by doing a check out of the latest code from the <a href='http://code.google.com/p/fluent-nhibernate/'>Fluent-NHibernate repository</a>.<br /><br />For each project in the solution, remove the references to NHibernate and NHibernate.Linq, then add references to the latest dlls.
These dlls are located in the sub-folder Tools/NHibernate/nhib2.1/ in your checkout folder.<br /><br />Now you can find a shining new FluentNHibernate.dll that uses NHibernate 2.1 in one of the bin folders.</p>Fri, 23 Jan 2009 14:31:36 +01002009-01-23T14:31:36+01:00120http://www.erikojebo.se/Code/Details/120webmaster@erikojebo.seSCTP Socket Programming<p>If you are interested in socket programming you really should take a look at the Stream Control Transmission Protocol (<a href='http://en.wikipedia.org/wiki/SCTP'>SCTP</a>). It is a new transport protocol that can be used in about the same way as both UDP and TCP, depending on which kind of SCTP socket is used.</p> <p>SCTP has two kinds of sockets, one-to-one sockets and one-to-many sockets. One-to-one sockets are very similar to classic TCP sockets, and are also known as TCP style sockets. One-to-many sockets are similar to UDP sockets, and as you probably can guess, are also known as UDP style sockets.</p> <p>SCTP one-to-one sockets are actually so similar to TCP sockets that you can convert a TCP application to SCTP just by changing the socket call:</p> <pre class='prettyprint'>// TCP socket<br />int tcp_fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); <p>// SCTP one-to-one socket<br />int sctp_fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP); </pre></p> <p>If you want to dig deeper into SCTP I can recommend <a href='http://unpbook.com/'>Unix Network Programming, 3rd</a> ed. by Stevens, R. The book covers both basic and more advanced parts of SCTP, with useful examples. If you do some kind of socket programming, you should own a copy of this book, no matter what protocol you currently use. The only problem with the SCTP part of the book is that the specification has changed slightly since the book was printed, so I would recommend updated man pages as API reference.
Personally I prefer the <a href='http://docs.sun.com/app/docs/coll/40.10'>Sun Solaris man pages</a>, which are available both as web pages and PDF.</p> <p>For RFCs, drafts and documents <a href='http://www.sctp.org'>SCTP.org</a> is the place to go.</p>Sat, 17 Jan 2009 16:44:03 +01002009-01-17T16:44:03+01:0098http://www.erikojebo.se/Code/Details/98webmaster@erikojebo.seRecompiling VirtualBox<p>If you run VirtualBox with Ubuntu as your host OS, you will probably run into some problems when there is a kernel upgrade. To fix this you just have to recompile the VirtualBox kernel module, like this:</p> <pre class='prettyprint'>killall VirtualBox<br />sudo /etc/init.d/vboxdrv setup</pre>Tue, 09 Dec 2008 10:21:15 +01002008-12-09T10:21:15+01:0084http://www.erikojebo.se/Code/Details/84webmaster@erikojebo.seBatch Image Resize/Rotate in Ubuntu<p>Resizing and rotating images can be a really time-consuming and boring task that just begs to be automated. If you are running Ubuntu, this is easily achieved by using <a href='http://imagemagick.org/script/index.php'>ImageMagick</a>.</p> <p>To install ImageMagick, just do an ordinary apt-get, like this:</p> <pre class='prettyprint'>sudo apt-get install imagemagick</pre> <p>Now you are ready to start batch processing your images.
Here are some examples of useful commands:</p> <p>To resize an image to a specific size:</p> <p><br /><pre class='prettyprint'>convert foo.jpg -resize 640x480 foo2.jpg</pre></p> <p><br />To resize using a percentage:</p> <p><br /><pre class='prettyprint'>convert foo.jpg -resize 50% foo2.jpg</pre></p> <p><br />To resize an image to a specific width, keeping aspect ratio:</p> <p><br /><pre class='prettyprint'>convert foo.jpg -resize 640 foo2.jpg</pre></p> <p><br />To resize an image to a specific height, keeping aspect ratio:</p> <p><br /><pre class='prettyprint'>convert foo.jpg -resize x480 foo2.jpg</pre></p> <p><br />To rotate an image:</p> <p><br /><pre class='prettyprint'>convert foo.jpg -rotate 90 foo2.jpg</pre></p> <p><br />These commands can be chained together like so:</p> <p><br /><pre class='prettyprint'>convert foo.jpg -rotate 90 -resize 640 foo2.jpg</pre></p> <p><br />To operate on multiple files, file <a href='http://en.wikipedia.org/wiki/Glob_(programming)'>globbing</a> can be used, like this:</p> <p><br /><pre class='prettyprint'>convert *.jpg -rotate 90 -resize 500 rotated_%03d.jpg</pre></p> <p><br />The %03d means three incrementing digits. So in this example, the new file names will be rotated_000.jpg, rotated_001.jpg etc.</p>Fri, 05 Dec 2008 09:35:57 +01002008-12-05T09:35:57+01:0079http://www.erikojebo.se/Code/Details/79webmaster@erikojebo.seUnit Testing Selection on a Windows Forms ListView Control<p>This is kind of an edge case, but it just stole about two hours of my day, so I thought it might be of interest for someone else who stumbles upon this problem.</p> <p>In the application I am currently working on I am subclassing a ListView to make it handle strongly typed tags, and for example directly get a strongly typed instance of the tag of the currently selected item.
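A minimal sketch of what such a subclass could look like (the class and member names below are my own hypothetical illustration, not the code from the actual application):

```csharp
using System.Collections.Generic;
using System.Windows.Forms;

// Hypothetical sketch of a ListView subclass with strongly typed tags.
public class TypedListView<T> : ListView
{
    // Adds an item and remembers its strongly typed tag.
    public ListViewItem AddItem(string text, T tag)
    {
        var item = new ListViewItem(text) { Tag = tag };
        Items.Add(item);
        return item;
    }

    // The tags of the currently selected items, strongly typed.
    public ICollection<T> SelectedTags
    {
        get
        {
            var tags = new List<T>();

            foreach (ListViewItem item in SelectedItems)
                tags.Add((T)item.Tag);

            return tags;
        }
    }
}
```

The cast from the untyped Tag property is hidden inside the subclass, so the calling code never has to deal with object.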
To test my implementation I programmatically created an instance of my ListView subclass, added items and added an item to the selection using both:</p> <pre class='prettyprint'>Item.Selected = true</pre> <p> and </p> <p><br /><pre class='prettyprint'>listView.SelectedIndices.Add(...)</pre></p> <p>The problem was that this had no effect at all on the ListView&#39;s SelectedIndices and SelectedItems collections.</p> <p>I tried doing the same thing on a ListView in one of the forms of my application and that worked, but it did not work if I instantiated the ListView directly in the code. After some googling I finally found a comment on the MSDN documentation for the ListViewItem.Selected property, saying that the Selected property could not be trusted if the ListView had not been drawn. I did some quick testing myself and confirmed that the problem indeed disappeared if the ListView was drawn.</p> <p>So, the solution to the problem was to do something like this:</p> <pre class='prettyprint'><br /> [Test]<br /> public void CanGetSelectedItems()<br /> {<br /> // Sadly, this is needed to make the ListView handle selection properly.<br /> // If the ListView is instantiated directly SelectedIndices, SelectedItems etc<br /> // do not update when modified.<br /> var f = new DummyForm(listView);<br /> f.Show(); <p> listView.SelectedIndices.Add(0);</p> <p> Assert.AreEqual(1, listView.SelectedIndices.Count);</p> <p> ICollection&lt;string&gt; items = listView.SelectedItems;</p> <p> Assert.AreEqual(1, items.Count);<br /> Assert.IsTrue(items.Contains(strings[0]));<br /> }</p> <p> private class DummyForm : Form<br /> {<br /> public DummyForm(ListView listView)<br /> {<br /> this.WindowState = FormWindowState.Minimized;<br /> this.ShowInTaskbar = false;<br /> this.Controls.Add(listView);<br /> }<br /> }<br /></pre></p>Thu, 20 Nov 2008 10:55:38 +01002008-11-20T10:55:38+01:0078http://www.erikojebo.se/Code/Details/78webmaster@erikojebo.seRhinoMocks - Expect that Method is Never Called
At All<p>If you want to set up an expectation that a method is never called at all, and not just that it is never called with a specific parameter, you can do it like this:</p> <pre class='prettyprint'>MockRepository mocks = new MockRepository();<br />Foo mockFoo = mocks.StrictMock&lt;Foo&gt;(); <p>// For void methods:<br />mockFoo.Bar();<br />LastCall.IgnoreArguments().Repeat.Never();</p> <p>// For non-void methods:<br />Expect.Call(mockFoo.Baz(1)).IgnoreArguments().Repeat.Never();</p> <p>mocks.ReplayAll();</p> <p>// Do stuff</p> <p>mocks.VerifyAll();</pre></p>Wed, 19 Nov 2008 13:35:23 +01002008-11-19T13:35:23+01:0077http://www.erikojebo.se/Code/Details/77webmaster@erikojebo.seFluent-NHibernate - Unknown Entity Class Exception<p>If you are using Fluent NHibernate and you get the exception:</p> <pre class='prettyprint'>NHibernate.MappingException: Unknown entity class: Foo.Bar.Baz</pre> <p>Make sure your mapping file (BazMap in this example) is public.</p> <p>If you still get the exception, make sure you are loading your class maps correctly. It should look something like this:</p> <pre class='prettyprint'>Configuration configuration = new Configuration().Configure(); <p>var persistenceModel = new PersistenceModel();<br />persistenceModel.addMappingsFromAssembly(typeof(Baz).Assembly);<br />persistenceModel.Configure(configuration);<br /> <br />var sessionFactory = configuration.BuildSessionFactory();</pre></p> <p>PersistenceModel is located in the FluentNHibernate namespace.</p>Tue, 18 Nov 2008 11:19:22 +01002008-11-18T11:19:22+01:0073http://www.erikojebo.se/Code/Details/73webmaster@erikojebo.seWindows PowerShell<p>This weekend I installed SQL Server 2008 Express and one of the requirements for the install was to have Windows PowerShell.
I had heard <a href='http://www.hanselman.com/blog/'>Scott Hanselman</a> talk about it previously, but never tried it myself.<br /><br />Today I tried it out for the first time, and after about 5 seconds I decided that it was a great product. The simple reason was that I typed ls and hit return, and it listed the folder contents for me. That is so sweet.<br /><br /><img src='/images/upload/code_blog/powershell.jpg' /><br /><br />Previously I had a small batch file named ls that did a dir, just to be able to list without having to think about it, but this is quite a bit nicer.</p>Fri, 14 Nov 2008 08:34:33 +01002008-11-14T08:34:33+01:0072http://www.erikojebo.se/Code/Details/72webmaster@erikojebo.seDebugging NUnit Unit Tests in Visual C# Express 2008<p>Since Visual C# Express does not allow add-ins, and attach to process is not available, you have to roll your own simple test runner if you want to be able to debug your unit tests.<br /><br />Fortunately this is quite simple. The easiest way, in my opinion, is to add a new Console Application project to your solution. Add a reference to nunit-console-runner.dll, and then add the following code to the main of the new console app:<br /><br /><img src='/images/upload/code_blog/nunit-console-runner.jpg' /><br />Set the console app as startup project by right clicking the project in the solution explorer and choosing &quot;Set as start up project&quot;.<br /><br />You can now set breakpoints in your test code and simply hit F5 to debug the tests.</p>Wed, 13 Nov 2008 15:42:23 +01002008-11-13T15:42:23+01:0070http://www.erikojebo.se/Code/Details/70webmaster@erikojebo.seCleaning Up Your Grub Boot Menu (Ubuntu)<p>As more and more kernel updates get installed on your system the grub menu tends to grow longer.
This is easy to fix:</p> <p>Back up your grub menu file:</p> <pre class='prettyprint'>sudo cp /boot/grub/menu.lst /boot/grub/menu.lst.bak</pre> <p>Edit your menu file:</p> <pre class='prettyprint'>sudo gedit /boot/grub/menu.lst</pre> <p>Comment out any unwanted entries by adding a # at the beginning of each line that should not be displayed.</p> <p>Save the file, and when you reboot the grub menu should be nice and small again.</p>Wed, 12 Nov 2008 13:04:16 +01002008-11-12T13:04:16+01:0050http://www.erikojebo.se/Code/Details/50webmaster@erikojebo.seChanging the Default Vista Explorer Window Size<p>After using Vista for a month or two, the single most annoying thing was that Windows Explorer kept opening in its default size, which is far too small, at least if you have a decent resolution. I finally got tired of it and googled for a fix. It turns out that if you shift-click the close button on the window, it remembers that size for the specific folder you are currently in. So what you have to do is:<br /><br />- start Explorer the way you normally do<br />- change the size of the window to the desired size, then<br />- shift-click the close button in the top right-hand corner before you do anything else.<br /><br />It should now pop up with the right size from now on.<br /><br />For more info, read this <a href='http://blog.maniacd.net/2007/05/15/steps-for-setting-vistas-default-explorer-window-size/'>blog post</a>.</p>Sat, 25 Oct 2008 07:45:22 +02002008-10-25T07:45:22+02:0034http://www.erikojebo.se/Code/Details/34webmaster@erikojebo.seC# Automatic properties in Visual Studio 2008<p>In .NET Framework 3.5, automatic properties were introduced. Automatic properties are a compiler feature that lets you write less code and thereby increases your productivity. 
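In code, the shorthand and its hand-written equivalent look roughly like this (a sketch; the class and property names are my own, not taken from the screenshots):

```csharp
class Person
{
    // Automatic property: the compiler generates a hidden backing field.
    public string Name { get; set; }

    // The equivalent property written out by hand:
    private int age;
    public int Age
    {
        get { return age; }
        set { age = value; }
    }

    // Read-only from the outside: make the setter private.
    public int Id { get; private set; }
}
```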
To add a property that only has a basic getter and setter, you only have to type:<br /><br /><img src='/images/upload/code_autoprop/prop.jpg' /><br />The compiler then fills in the rest at compile time, making the above line equivalent to:<br /><br /><img src='/images/upload/code_autoprop/oldprop.jpg' /><br /><br />To create a read-only property, just make the setter private:<br /><br /><img src='/images/upload/code_autoprop/propg.jpg' /><br />To simplify this even further, there are two code snippets, prop and propg, that generate template properties and let you fill in the name and type. The snippet &quot;prop&quot; generates a read/write property and &quot;propg&quot; generates a read-only property.<br /><br />To invoke a snippet, start typing the snippet name and it will appear in the IntelliSense window.<br /><br /><img src='/images/upload/code_autoprop/vs2008prop.jpg' /><br />Press Tab once to auto-complete the snippet name, and Tab again to expand the snippet name into the actual snippet.<br /><br /><img src='/images/upload/code_autoprop/vs2008prop_expanded.jpg' /></p>Wed, 01 Oct 2008 12:20:56 +02002008-10-01T12:20:56+02:009http://www.erikojebo.se/Code/Details/9webmaster@erikojebo.seHaskell<p>As you probably have noticed, functional programming has recently become the new, hot thing, and one of the languages that has suddenly gotten an upswing in publicity is <a href='http://en.wikipedia.org/wiki/Haskell_(programming_language)'>Haskell</a>.</p> <p>After listening to a few podcasts on functional programming I decided to play around with Haskell, and came up with the idea to use it for my <a href='http://projecteuler.net'>Project Euler</a> solutions. 
A quick search led to the Ubuntu Hardy Heron apt packages <a href='http://packages.ubuntu.com/gutsy/ghc6'>ghc6</a> and <a href='http://packages.ubuntu.com/gutsy/haskell-mode'>haskell-mode</a>, which are the Glasgow Haskell Compiler and an Emacs add-on for Haskell syntax support.</p> <p>As always, the apt installation was quick and simple, so in a matter of minutes I had a nice little Haskell development environment set up. The next step was to read some tutorials at the official <a href='http://www.haskell.org'>Haskell page</a> and shortly thereafter I was up and running.</p> <p><img src='http://upload.wikimedia.org/wikipedia/commons/3/30/Fp_no_destructive_assignment.png' class='left' />I have to say that Haskell must be one of the most elegant languages I have encountered so far. The pattern matching makes function definitions very simple and easy to read, but the detail I found the most appealing is the list definition syntax. For example:</p> <pre class='prettyprint'>[a | a &lt;- [1..], a `mod` 3 == 0]</pre> <p>This declares an infinite list of all integers that are evenly divisible by three, which is made possible by Haskell&#39;s <a href='http://en.wikipedia.org/wiki/Lazy_evaluation'>lazy evaluation</a>.</p> <p>A few days ago I was searching for new sites with video lectures, and to my surprise I found a great <a href='http://haskell.org/haskellwiki/Video_presentations#Introductions_to_Haskell'>Haskell video lecture</a> from the O&#39;Reilly Open Source Convention. The lecture was given by <a href='http://en.wikipedia.org/wiki/Simon_Peyton_Jones'>Simon Peyton Jones</a>, who is one of the guys behind the language. To my even greater surprise I then read that the <a href='http://se-radio.net'>Software Engineering Radio</a> team had just released a <a href='http://feeds.feedburner.com/~r/se-radio/~3/377820928/episode-108-simon-peyton-jones-functional-programming-and-haskell'>podcast</a> with Peyton Jones on functional programming and Haskell. 
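Since the list comprehension above describes an infinite list, you can only ever demand a finite prefix of it; a small sketch of how that looks (the name multiplesOfThree is my own):

```haskell
-- Infinite list of the positive integers evenly divisible by three.
multiplesOfThree :: [Integer]
multiplesOfThree = [a | a <- [1..], a `mod` 3 == 0]

-- Lazy evaluation means only the demanded prefix is ever computed.
main :: IO ()
main = print (take 5 multiplesOfThree) -- prints [3,6,9,12,15]
```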
Hopefully, more resources like these will pop up in the future.</p> <p>If you are up for it, I really recommend playing around with Haskell. As a starting point, I would suggest the short tutorials at <a href='http://www.haskell.org'>Haskell.org</a> and the video lectures mentioned above.</p>Thu, 04 Sep 2008 14:14:56 +02002008-09-04T14:14:56+02:002http://www.erikojebo.se/Code/Details/2webmaster@erikojebo.seUnit Testing in PHP<p>Recently I have begun to adopt a test-driven style of development, trying to do it the <a href='http://en.wikipedia.org/wiki/Kent_Beck'>Kent Beck</a> way, i.e. test first. When I started planning the development of this site I remembered reading about <img src='http://upload.wikimedia.org/wikipedia/commons/2/27/Crystal_Clear_mimetype_php.png' class='right' />two different unit testing frameworks for PHP, namely <a href='http://phpunit.de'>PHPUnit</a> and <a href='http://simpletest.org/'>Simpletest</a>. I decided to start with PHPUnit, and to my delight it was available as an apt package in my beloved <a href='http://www.ubuntu.com/'>Ubuntu</a>, so the installation went as smoothly as only apt installations can.<br /><br />I tried it out on the shipped example, a unit test of an array, and was a bit disappointed with the interface. I had envisioned a PHP based framework that you simply ran in your browser, providing a nice HTML GUI. Instead it was a command line utility that gave quite sparse feedback when running the tests. It had some nice features, such as report generation in various formats and code coverage measurement, so it seemed powerful, but it lacked the green bar/red bar feedback.<br /><br />So, Google to the rescue. I did a quick search and found the documentation for the PHPUnit API, which basically explained a nice way of using the API to create a GUI wrapper for PHPUnit. 
Another search resulted in a SourceForge project called Cool (<a href='http://cool.sourceforge.net/'>&quot;Cool Object Orientated Library&quot;</a>, a little GNU-esque there... and it really is &quot;orientated&quot;). This turned out to be just what I was looking for. It was a PHP site that provided a nice GUI for PHPUnit. The installation was painless, since all you had to do was extract it so that it was available from the web server.<br /><br />I started playing around with Cool and PHPUnit and, unfortunately, it soon became obvious that Cool did not handle test suites that well, or at least that it did not work properly in my set-up. Once again, to the Google-mobile, but no luck, so I decided to give Simpletest a try instead.<br /><br />As with Cool, all you had to do to install Simpletest was to extract it to a place that was reachable by your PHP code and you were done. I was glad to find that it shipped with a simple HTML GUI, so this was more in line with what I first had in mind. The documentation was well written and had good examples, so it did not take long to get up and running. Since everything felt solid I decided to use Simpletest for the project.<br /><br />Simpletest turned out to be really nice to work with, and it has a nice mocking framework that was easy to use. The only downside was the HTML reporter, which was very basic, so after a while I decided to write a slightly more advanced reporter. The API used by the reporter classes was documented, and since the source for the original HTML reporter was available, it was quite a simple job to write a custom reporter.<br /><br />So, if you want a lightweight, portable testing framework for PHP, I can recommend Simpletest. I will probably post my extended HTML reporter at this site in the near future, if anyone wants to try it out.</p>Thu, 04 Sep 2008 13:58:06 +02002008-09-04T13:58:06+02:00