Repositories are dead. Long live repositories.

Note: Here’s one from the backlog that I hope to get back to one day. I’m currently working on a GraphQL-based project, which makes this all superfluous, and I’ve had to shelve my work on this, but I want to put it out there anyway before I forget. Next time I’m working on a more traditional REST API, I hope to dust this off and really live in it.


Personally, I’ve been a holdout on “Team Repository” for perhaps a little too long now. I like the way repositories let me define a query once, in one place, and then share it with the rest of the codebase. I like that I can right-click on a query method and say “Find Usages”. I’m not so much a fan of them being the ultimate cutoff where the query gets materialized because that leads down a path of a hundred queries that only vary slightly from each other. I feel the time has come to design something better, and it’s absolutely dead simple.

Why Repositories?

Like everything else we do, we’re constantly trying to DRY out our code and increase reuse. As developers, we also like to create the “one true answer”, to invent the new shiny thing, to go down in history as the inventor of the last, best solution to all of our problems. Or maybe we just want to make our day to day development easier by solving a problem once and for all. This is the promise of repositories. I define my GetUserByEmailAddress query once, in one place, and then the rest of my code can simply reuse that golden implementation and never think about it again. We want to kill the problem and walk away.

Repositories let us do this. They let us define what operations the rest of the code is allowed to do, exactly which parameters are required to do that work, and they prevent other code from going rogue and inventing their own ad-hoc queries. They are supposed to keep us safe. But they grow out of control, and eventually we end up with gigantic repositories with hundreds of methods, most of which have only one caller, or we end up with a single method on that repository that hands back an IQueryable, and everything else in the application ends up calling that. Neither are ideal outcomes.

Why NOT repositories?

There has been a movement afoot for several years now to do away with repositories entirely, and while part of me screamed “Nooo! Not my repos! I love my repos! I just got them how I like them!”, I could see where they were coming from. Entity Framework’s DbSet<T> really IS a repository. It can load, save, and update items. If you define a repository class to sit on top of it, you’re really just wrapping one repo inside another.

In a typical layered application, there is usually some kind of service or “logic” layer that serves as the brains of the outfit. Sometimes this is implemented in the objects themselves in an old-school “object-oriented” way. That way lies the “Active Record” lands, and I personally don’t go over there if I can help it. I’m in the smart services camp until someone talks me out of it. As such, I like my service layer to be the one making the decisions, calling the shots, and generally being trusted to know what it’s doing.

A traditional repository takes some of that responsibility away from the service, and normally we’d say “Separation of concerns! Single responsibility principle!” and shout down the anarchist rebels who want to blend our carefully separated layers together… the heathens. I’d ask you to consider for a moment though that it’s the service’s job to ask for what it wants. It’s merely the repository’s job to go get it, while hiding the fact that there’s a database there behind the scenes. We’ve given the repos the power to tell the rest of the application what is possible.

So what’s this big idea then?

Prepare to have your mind blown, folks, because this is going to shake the very foundations of your world. Are you ready for it? Here it comes… DbSet Extension Methods.

I know, right? Boom. Drop the mic and walk off the stage.
What? You don’t see it?
Okay, fine.

If a DbSet<User> is already a kind of repository, then all it’s missing is the “GetById” or “GetByEmailAddress” methods.

public static class UserDbSetExtensions
{
    public static IQueryable<User> GetByEmailAddress(this DbSet<User> dbSet, string emailAddress)
    {
        return dbSet.Where(x => x.EmailAddress == emailAddress);
    }
}

These are the reusable bits of code that we wanted all along, right? So why not just tack them onto the existing DbSet<T> implementations and call it a day? We get the best of both worlds. I have a convenient place to define frequently-used queries or operations, and my services still have direct access to the DbSet if needed.
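For illustration, here’s how a service might consume that query straight off the context (assuming EF Core here). The AppDbContext and UserService names are just placeholders for whatever your project already has; the point is that the service still talks to the DbSet directly while reusing the shared query definition.

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class UserService
{
    private readonly AppDbContext _context; // hypothetical DbContext exposing a DbSet<User> named Users

    public UserService(AppDbContext context)
    {
        _context = context;
    }

    public Task<User> FindByEmailAddress(string emailAddress)
    {
        // Reuse the well-known query, then materialize it however this service needs to.
        return _context.Users.GetByEmailAddress(emailAddress).FirstOrDefaultAsync();
    }
}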

I considered whether maybe inheriting from DbSet<T> and making a concrete UserDbSet would be the way to go, but there are casting problems within Entity Framework itself, or at least with EF Core the last time I checked. There will be code out there in libraries you want to use that might assume something will be an actual DbSet<User> and won’t know what a UserDbSet is. As long as the inheritance is correct, it should still work, but I found this issue on EF Core’s GitHub page, and that killed the idea for me immediately. Maybe it will be possible someday, but that was just too much uncertainty for me at this point.

In the meantime, adding well-known operations or queries as extension methods will have to do. Generally speaking, extension methods are for extending things you don’t own, but since we’re prevented from writing a proper descendant class, they’re the pragmatic choice for the moment.

You could build these extension methods off of IQueryable<T> instead of DbSet<T>, which would let you stack them up and compose queries out of pieces like Legos, but I don’t think that’s the best fit here. This approach is meant to be a more direct replacement for repository methods, and repositories don’t work that way; you can’t chain traditional repo methods, so chaining here wouldn’t really fit the pattern anyway. What we’re after is an authoritative menu of hand-written, optimized, well-known queries. If you need the order list filtered by client Id, and a query already exists that does just that, then you can use that query and be on your way. If not, then it’s business as usual.
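For contrast, here’s a sketch of the Lego-style alternative built on IQueryable<T>, with an illustrative Order entity. It composes nicely, but it gives up the idea of a single authoritative query per scenario, which is the whole point of this approach.

using System.Linq;

public static class OrderQueryExtensions
{
    // Each piece accepts and returns an IQueryable, so callers can stack them in any order...
    public static IQueryable<Order> ForClient(this IQueryable<Order> orders, int clientId)
    {
        return orders.Where(x => x.ClientId == clientId);
    }

    public static IQueryable<Order> OpenOnly(this IQueryable<Order> orders)
    {
        return orders.Where(x => !x.IsClosed);
    }
}

// ...which means the "official" query for any given scenario ends up assembled at the call site:
// var openOrders = context.Orders.ForClient(clientId).OpenOnly().ToList();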


.Net Source Generators

Code Generation

Code generation is a great way to apply patterns consistently across a solution or to create multiple similar classes based on some outside input file, or even other classes in the same solution. The tooling has changed over the years, but the result is the same… a lot of code that I don’t have to write by hand, or update when patterns change and evolve.

I’ve been generating source code since the early days of .Net 1.0 when we used some plug-ins to generate source code based on class diagrams drawn in Visio. It worked, but it was a one-way trip because .Net didn’t have partial classes yet. We left that behind pretty quickly for a tool from Borland called “Together” that kept our code and diagrams in sync as you worked directly on either half. It was pretty awesome, but Together kind of died out when the price suddenly skyrocketed for version 2.0.

Then, somewhere around the Visual Studio 2008 timeframe, I discovered T4 templates. A coworker had written a command-line program to generate mockable testing shims around Linq to SQL contexts, and I adapted it to run inside a T4 template so that we wouldn’t have to keep running it and copying the results into our solution by hand every time we made a change to our models. These templates were stored as artifacts in our solution and added code directly to our projects, which was a big win. I was hooked, and used this approach to eliminate other repetitive coding tasks such as creating Builder classes for testing support, which is still my primary use case today.

T4 templates, or at least the way I was using them, became untenable with the release of .Net Core because the reflection code that I relied on no longer worked. T4 templates run under the full framework and cannot reflect over .Net Core code, or at least they couldn’t at the time.

I moved on to Roslyn templates and a project called “Scripty.msbuild” to transform them at build time. The .csx files that drove this approach did their job very well, and I was happy with how they worked. I could keep my reflection-based code mostly intact because I was just swapping out the runner. Unfortunately, Scripty was more or less abandoned in 2017, and when it started having some problems as well, I needed a new alternative.

Next, I tried using TypeWriter to generate my Builder code. TypeWriter was meant for creating JavaScript-equivalent DTOs out of .Net code, but it can be used to create any kind of text file. Still, it’s clearly focused on emitting JavaScript and TypeScript, so more work was needed to get it to generate .Net code.

Most recently, I was back to using T4 again, but without the reflection part. The T4 engine still works just fine. We just need a different input than reflection if we’re going to use it on a .Net Core project. My current project is using an external designer to create the business entities, much like Visio and Together did back in the day. The XML file from this designer is used as the input for the templates instead of reflection, so T4 seems to be working just fine.

That’s great, and solves our problem in newer code, but the writing on the wall seems clear that T4 templates are not a thing Microsoft is interested in developing any further, and I imagine they’ll just go out of style completely at some point now that we have .Net Source Generators.

Source Generators run inside Visual Studio and can generate code in real time as you work, which makes them a lot easier to work with than the systems that came before them. They’re implemented as Visual Studio analyzers, which you’d normally use to do things like putting squiggly underlines in the code to suggest changes, and maybe even implementing “quick fixes” to make those changes for you. Source Generators take this a step further and let you create entirely new code that becomes part of the project, and not just at compile time. The code that generators create is available immediately. You can reference the generated code in your own classes, even in the same project, without having to recompile everything first.

It’s not all perfect though. The mechanism is still pretty young, and the Visual Studio experience is in its early days, so the process of creating a Source Generator isn’t totally smooth yet, but once you’ve written your generator, the act of using it is pretty painless.

Planning

The first step in generating code is, counterintuitively, to write that code by hand. Start by trying to write the thing you intend to generate and make sure it works the way you expect. Work out any kinks or problems and take it for a test drive before trying to automate it. Hash the idea out before creating a factory to mass produce it. It’s usually easier to write code than it is to write code that writes code.

The ideal candidate for generated code is a pattern that you’ve nailed down already and grown tired of writing over and over again. If you find yourself saying “I never want to write another {thing}” again, then that thing might be an ideal candidate for generation.

As you progress along, compare your generated output to the example class you started with in order to identify the parts that aren’t coming out quite right or that you haven’t finished yet. Tackle a feature at a time, and when your generator can output the same (or similar-enough) code to what you wrote by hand, then you’ll know you’re done.

For this demo, I won’t make you watch the entire process of evolving the whole system in real-time, so I’ll be jumping ahead quite a bit as we go. Just know that this is how my example was developed, one piece at a time.

First Steps

Create a new class library project to hold the source generator itself. I’m calling mine “BuilderGenerator”. This project shouldn’t target a specific runtime framework; source generators need to target .Net Standard, so either choose “netstandard2.0” when creating the project, or alter the TargetFramework node in the csproj file as follows. At the time I wrote this, you still had to specify the language version as “Preview” as well, although that may not be the case by the time you read this.

In addition, you’ll need references to the Microsoft.CodeAnalysis.CSharp and Microsoft.CodeAnalysis.Analyzers packages. You can do this through Visual Studio’s NuGet package manager which might find a newer version, or you can copy and paste these in from the example below and then go look for updates. The choice is yours.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <LangVersion>preview</LangVersion>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.2" PrivateAssets="All" />
    <PackageReference Include="Microsoft.CodeAnalysis.CSharp" Version="3.9.0" PrivateAssets="All" />
  </ItemGroup>

</Project>

Remember to save your project file before continuing, or Visual Studio may get confused. This project can now be used as a source generator. We just need to fill in something for it to generate. Let’s create our “BuilderGenerator” class. You can call this class anything you want, but I’ll name it the same as the project and solution just to keep things simple. Decorate it with the “Microsoft.CodeAnalysis.Generator” attribute, implement the “ISourceGenerator” interface, and you should end up with something that looks like this.

using System;
using Microsoft.CodeAnalysis;

namespace BuilderGenerator
{
    [Generator]
    public class BuilderGenerator : ISourceGenerator
    {
        public void Initialize(GeneratorInitializationContext context)
        {
            throw new NotImplementedException();
        }

        public void Execute(GeneratorExecutionContext context)
        {
            throw new NotImplementedException();
        }
    }
}

Rather than throwing exceptions at runtime, which won’t get us very far, let’s start building out some code. We’ll start off small, and generate a class that says “Hello World!”.

using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

namespace BuilderGenerator
{
    [Generator]
    public class BuilderGenerator : ISourceGenerator
    {
        public void Initialize(GeneratorInitializationContext context)
        {
            // Nothing to do... yet.
        }

        public void Execute(GeneratorExecutionContext context)
        {
            var greetings = @"using System;

namespace Generated
{
    public class Greetings
    {
        public static string HelloWorld()
        {
            return ""Hello World!"";
        }
    }
}";

            context.AddSource("Greetings.cs", SourceText.From(greetings, Encoding.UTF8));
        }
    }
}

The work is done in the “Execute” method which is simply outputting a string containing the entire Greetings class contents to a file. This is the simplest form of code generation there is, simply adding pre-defined contents to the project that’s using the generator.

Using the Source Generator

Now that we have a source generator, albeit a boring one, let’s add it to a project to see it in action. Add a second project to the solution. I’m calling mine “Domain”. It will be taking the place of a typical domain layer and define some sample entities. This project can safely target a specific core framework version as usual. Edit the resulting csproj file to look like this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <LangVersion>preview</LangVersion>
  </PropertyGroup>

  <ItemGroup>
    <ProjectReference Include="..\BuilderGenerator\BuilderGenerator.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false" />
  </ItemGroup>

</Project>

Note that the reference to the BuilderGenerator project has some additional properties on it. The first, OutputItemType="Analyzer", tells Visual Studio to treat the referenced project as an analyzer rather than an ordinary library. The second, ReferenceOutputAssembly="false", says not to reference the generator’s output assembly from this project at all. As of now, you can’t reference a project as both an analyzer and a library at the same time, although I’m hoping this might change in the future because it would make things like base classes or attributes easier to implement.

This new analyzer will generate code that lives in the Domain project itself, although your ability to see it may be limited. Support for viewing generated code is improving all the time, so depending on the vintage of your Visual Studio installation, it may or may not let you browse to the generated code easily. You can, however, make references to it from other classes and/or projects, so let’s see if we can find it. Add a unit test project to the solution. I’ll call mine “UnitTests”. Add a reference to the Domain project, and then create our first test, called “GreetingsTests”

using NUnit.Framework;

namespace UnitTests
{
    [TestFixture]
    public class GreetingsTests
    {
        [Test]
        public void SayHello_returns_expected_string()
        {
            Assert.AreEqual("Hello World!", Generated.Greetings.HelloWorld());
        }
    }
}

Depending again on your version of Visual Studio, you may see some Intellisense errors at this point, and may have to restart Visual Studio before continuing. It appears to me that Visual Studio loads up the analyzer once, when loading the solution, and will keep using that analyzer for the rest of that session. This may or may not accurately describe what is really going on behind the scenes, but it’s certainly the effect I observed. Any time I made a change to the generator itself, I’d have to restart Visual Studio to take advantage of the changes.

Once the generator itself is finished, you should be able to go about using it “live” without any problems, but the process of working on the generator itself may be a bit of a chore. If you run into this problem, restart Visual Studio and Intellisense should start working again, including auto-complete. Hopefully these errors will go away in later releases and by the time you read this, it may have been solved once and for all.

Generating Real Code

Now that we have a working source generator, let’s make it do something more useful than returning “Hello World”. I’ve posted several times in the past about my love of the Builder pattern, and how it can help make your tests easier to write and also easier to read, communicating their intent more clearly to other developers. Since this is a pattern that I have implemented using just about every available source generation system in .Net, it seems like a natural choice for my first Source-Generator-based project.

Let’s start simply. For each entity class (e.g. User), we want to generate a corresponding “Builder” class (e.g. UserBuilder). The contents aren’t important yet, just that we can generate the builder class, and that we can refer to it from other classes within the same project, or from the tests.

First, we’ll create an attribute to mark the classes we want to generate builders for. Unfortunately, we can’t just define the attribute in the BuilderGenerator project itself. Our Domain project is referencing the BuilderGenerator project, but it’s doing it as an Analyzer, not as a library, so we can’t make direct use of the classes defined there. Instead, we can either create another project to house the attribute and other common elements such as base classes, or we can have the generator simply output the attribute class the same way we generated the Greetings class above. In fact, we can simply replace the existing contents of the BuilderGenerator class to emit this new attribute class instead.

using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

namespace BuilderGenerator
{
    [Generator]
    public class BuilderGenerator : ISourceGenerator
    {
        public void Initialize(GeneratorInitializationContext context)
        {
            // Nothing to do... yet.
        }

        public void Execute(GeneratorExecutionContext context)
        {
            var attribute = @"namespace BuilderGenerator
{
    [System.AttributeUsage(System.AttributeTargets.Class)]
    public class GenerateBuilderAttribute : System.Attribute
    {
    }
}";

            context.AddSource("GenerateBuilderAttribute.cs", SourceText.From(attribute, Encoding.UTF8));
        }
    }
}

We’ll reuse this same mechanism to generate any other attributes or base classes that we need.

Next, in the Domain project, create a folder to hold our entity definitions. I’ll call mine “Entities”. Create a new entity called “User” in the “Entities” folder, give it a couple properties, and decorate it with the “GenerateBuilder” attribute which should already be available if everything is working properly.

using BuilderGenerator;

namespace Domain.Entities
{
    [GenerateBuilder]
    public class User
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

Again, the contents are not important at this stage. We just want something to generate a builder for. Since this is not a tutorial on Roslyn syntax, I won’t get too deep into the workings here, but we’re going to want what’s called a “SyntaxReceiver”, which acts as a kind of filter, sorting through the syntax tree that represents all of the code for our project, and forwarding the parts we’re interested in to our generator. Add a new class called “BuilderGeneratorSyntaxReceiver” with the following contents.

using System.Collections.Generic;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;

namespace BuilderGenerator
{
    internal class BuilderGeneratorSyntaxReceiver : ISyntaxReceiver
    {
        public List<ClassDeclarationSyntax> Classes { get; } = new();

        public void OnVisitSyntaxNode(SyntaxNode syntaxNode)
        {
            // Collect every class declaration decorated with [GenerateBuilder] (or the long-form [GenerateBuilderAttribute]).
            if (syntaxNode is ClassDeclarationSyntax classDeclaration
                && classDeclaration.AttributeLists.Any(x => x.Attributes.Any(a => a.Name.ToString() is "GenerateBuilder" or "GenerateBuilderAttribute")))
            {
                Classes.Add(classDeclaration);
            }
        }
    }
}

The name isn’t actually important, but we’ll name it for the generator it serves to keep it clear. This filter looks for any classes in the syntax tree with the GenerateBuilder attribute on them, and adds them to a list called “Classes”.
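One wiring detail that’s easy to miss: Execute only sees our receiver if we register it during initialization. With the ISourceGenerator API, that means replacing the empty Initialize method from earlier with a registration call along these lines:

        public void Initialize(GeneratorInitializationContext context)
        {
            // Register the receiver so Roslyn runs it over every syntax node and hands the results to Execute.
            context.RegisterForSyntaxNotifications(() => new BuilderGeneratorSyntaxReceiver());
        }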

Replace the contents of the BuilderGenerator.Execute method with the following:

        public void Execute(GeneratorExecutionContext context)
        {
            if (context.SyntaxReceiver is not BuilderGeneratorSyntaxReceiver receiver) return;

            foreach (var targetClass in receiver.Classes.Where(x => x != null))
            {
                var targetClassName = targetClass.Identifier.Text;
                var targetClassFullName = targetClass.FullName();

                var builderClassNamespace = targetClass.Namespace() + ".Builders";
                var builderClassName = $"{targetClassName}Builder";
                var builderClassUsingBlock = ((CompilationUnitSyntax) targetClass.SyntaxTree.GetRoot()).Usings.ToString();
                var builder = $@"using System;
using System.CodeDom.Compiler;
{builderClassUsingBlock}
using Domain.Entities;

namespace {builderClassNamespace}
{{
    public partial class {builderClassName} : Builder<{targetClassName}>
    {{
        // TODO: Write the actual Builder
    }}
}}";

                context.AddSource($"{targetClassName}Builder.cs", SourceText.From(builder, Encoding.UTF8));
            }
        }

There are a few things going on here. First, we make sure that we’re only paying attention to the classes that have passed through the SyntaxReceiver we defined earlier. For each class that passes the filter, which I’ve called a “target class”, we extract some basic information and inject it into a template in order to create a builder class, which is then added to the consuming project. That class is empty for now because our first goal is simply to generate a builder for each decorated entity class.
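The FullName() and Namespace() calls above are small helper extension methods from the author’s own project rather than Roslyn built-ins. A minimal sketch of what they might look like, assuming classic block-style namespace declarations:

using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;

namespace BuilderGenerator
{
    internal static class SyntaxExtensions
    {
        // Walks up the syntax tree to find the namespace containing the class declaration.
        public static string Namespace(this ClassDeclarationSyntax source)
        {
            return source.Ancestors().OfType<NamespaceDeclarationSyntax>()
                .Select(x => x.Name.ToString())
                .FirstOrDefault() ?? string.Empty;
        }

        // Combines the namespace and the class identifier into a fully-qualified name.
        public static string FullName(this ClassDeclarationSyntax source)
        {
            var ns = source.Namespace();
            return string.IsNullOrEmpty(ns) ? source.Identifier.Text : $"{ns}.{source.Identifier.Text}";
        }
    }
}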

Since we’ve made changes to the generator itself, you may need to restart Visual Studio before it starts using the new version, but if everything is working properly, a new class called UserBuilder will be added to the Domain project. Just like with the attribute, this file won’t appear in the Solution Explorer, but it should appear in the Class View, under Domain.Entities.Builders, and if you double-click to open it, you should see a new “UserBuilder” class with a comment that says “TODO: Write the actual Builder”.

You can already reference this class in your other code such as in unit tests. Delete the old GreetingsTests class, and replace it with a new “BuilderTests” class that looks like this:

using Domain.Entities.Builders;
using NUnit.Framework;

namespace UnitTests
{
    [TestFixture]
    public class BuilderTests
    {
        [Test]
        public void UserBuilder_exists()
        {
            var actual = new UserBuilder();
            Assert.IsInstanceOf<UserBuilder>(actual);
        }
    }
}

Now that we have the skeleton of our builder being generated, we’ll want to start filling in the parts that make it work, starting with the backing properties for the target class’ public properties. We start by pulling the list of settable properties from the target class, and adding a property to the builder for each one. Add the following just after the declaration of targetClassFullName in the Execute method.

var targetClassProperties = targetClass.DescendantNodes()
    .OfType<PropertyDeclarationSyntax>()
    .Where(x => x.IsInstance() && x.HasSetter())
    .ToArray();
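Like FullName() and Namespace(), the IsInstance() and HasSetter() calls are helper extensions from the author’s project rather than Roslyn APIs; a rough sketch of what they might check:

using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

namespace BuilderGenerator
{
    internal static class PropertyExtensions
    {
        // An "instance" property is simply one that isn't declared static.
        public static bool IsInstance(this PropertyDeclarationSyntax property)
            => !property.Modifiers.Any(x => x.IsKind(SyntaxKind.StaticKeyword));

        // Only properties with a setter can be populated by the generated builder.
        public static bool HasSetter(this PropertyDeclarationSyntax property)
            => property.AccessorList?.Accessors.Any(x => x.IsKind(SyntaxKind.SetAccessorDeclaration)) ?? false;
    }
}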

Then, replace the TODO comment with a call to a new method called “BuildProperties”.

    public partial class {builderClassName} : Builder<{targetClassName}>
    {{
        {BuildProperties(targetClassProperties)}
    }}

And finally, define the BuildProperties method as follows.

        private static string BuildProperties(IEnumerable<PropertyDeclarationSyntax> properties)
        {
            var result = string.Join(Environment.NewLine,
                properties.Select(x =>
                    {
                        var propertyName = x.Identifier.ToString();
                        var propertyType = x.Type.ToString();

                        return @"        public Lazy<{propertyType}> {propertyName} = new Lazy<{propertyType}>(() => default({propertyType}));"
                    }));

            return result;
        }

I like to make my backing properties Lazy<T> because they don’t just hold the final property value of the completed object. Builders represent the plan for how to build that object, and plans are subject to change. Using a lazy value means that the code to set the final value won’t run until the object is built at runtime.

Next Steps

I’m not going to make you read through the entire evolution of my Builder pattern here because it’s only one specific use case, but the next step would be to add a method called “Build” that creates the final object, and a set of convenience methods to support a fluent interface. If you want to see the code that makes up my own BuilderGenerator, you can find the project source on its GitHub page.
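The real thing is on GitHub, but as a rough sketch of the shape the generated output is heading toward, a UserBuilder plus a minimal Builder<T> base class might look something like this; the member names are illustrative rather than the generator’s exact output:

using System;
using Domain.Entities;

namespace Domain.Entities.Builders
{
    // A minimal hand-written stand-in for the Builder<T> base class the generated code inherits from.
    public abstract class Builder<T> where T : class
    {
        public abstract T Build();
    }

    public partial class UserBuilder : Builder<User>
    {
        public Lazy<string> FirstName = new Lazy<string>(() => default(string));
        public Lazy<string> LastName = new Lazy<string>(() => default(string));

        // Fluent setters replace the "plan" for a property rather than setting a value immediately.
        public UserBuilder WithFirstName(string value)
        {
            FirstName = new Lazy<string>(() => value);
            return this;
        }

        public UserBuilder WithLastName(string value)
        {
            LastName = new Lazy<string>(() => value);
            return this;
        }

        // Build materializes the plan into an actual User instance.
        public override User Build()
        {
            return new User
            {
                FirstName = FirstName.Value,
                LastName = LastName.Value,
            };
        }
    }
}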

There are many different examples of Builder classes out there, and some are more complicated than others, but that’s the beauty of code generation. If I expand on my pattern, to add new features, I only have to update the template, and the changes will be reflected across all my classes, or even across all my projects.

What I’ve shown here demonstrates the basic scaffolding of a Source Generator, and how to drive that generator’s output using the existing code in a project. From here, you would iteratively add a feature at a time until the generated code matches the hand-written example you started from. You can follow this pattern to generate any kind of highly-repetitive code that you want in your projects.


Don’t Repeat Yourself (Meta Edition)

“Don’t Repeat Yourself”

Developers should all be familiar with this mantra, but we tend to think of it only as it relates to the code, and not the coder. It’s the second part that I want to talk about, so skip ahead if you want to, but I’m going to talk about the traditional DRY (Don’t Repeat Yourself) principle first.

Don’t Repeat Your Code

We factor out code all the time. We move things to a base class, or extract it to a helper so that we only have one copy to maintain. If we find a flaw, we fix it once, in one place, and the rest of the code reaps the benefits.

This is infinitely better than the copy/paste reuse model in which we’d have to spend not only the time to fix the problem, but also the time to hunt down all of the other places where this code exists and fix them too. If, that is, you are lucky enough to work on a team that values craftsmanship and long-term maintainability of the codebase. All too often, you find yourself in a situation where some piece of code has been copied and pasted into a few locations before anyone realized that it was becoming a pattern. One developer wrote something he needed, and then someone else needed to do either the same thing, or something very similar. It’s the “very similar” where things start to fall apart.

The problem is that there are now two copies of very similar code, but they’re not quite identical. If a flaw is found, someone has to find all the instances of that flaw and correct them. This is more time-consuming than fixing one single shared instance of that code. It gets worse though. Since the instances aren’t guaranteed to be identical, how do you know if you found them all? If the variable names are different in each instance, what are you even going to search for? Maybe you’ll get lucky and there are a couple keywords in a row that aren’t instance-specific, and you can search on those to find all of the copies. Hopefully, you recognize the pattern and extract it before it becomes widespread.

Once is unique, twice is coincidence, three times is a pattern. The “Rule of Three” states that we should have refactored out the first two instances as soon as we needed the same code a third time. Hopefully you don’t work in a control-freak environment where you are prevented from touching the first two instances because they’re not “your code”, or they are locked down because the original features they were written for are considered “done” and are not to be touched in any way ever again.

Don’t laugh. I know places like that.

That’s why we have unit tests in the first place. I want to be able to prove that my change didn’t break someone else’s code. I can only do that if I have adequate coverage of that code in the first place. With good test coverage, I can refactor with wild abandon, and know that I haven’t changed any of our documented assumptions about how the code should behave.

Don’t Repeat Your Mistakes

This is what I’m really here to talk about today. I want developers to start thinking about how the DRY principle relates to their daily activities, and in particular how it relates to our mistakes. We’re human, we forget things, we slip up, we communicate poorly with our team members, and things fall through the cracks as a result. When we find the resulting flaws, our instinct is to fix them and move on. The next time you find a problem, I’d like you to stop and think about whether you can do anything to prevent it from happening again in the future. Can you stop anyone else from making the same mistake?

One way I like to combat this problem is through unit testing, but not in the way you’re thinking. Developers who are “test infected” like to put tests around everything, but most stop at the immediate feature at hand. When a bug is discovered, we write a test that demonstrates it by failing (Red). We take our best stab at fixing the bug until the system works (Green). Finally, we clean up after ourselves, extract and refactor code to help out the next guy, and use the new test to check that we didn’t break the code in the process (Refactor).

Whenever possible, I like to write a “guard rail” test that lets me know whether anyone else has done the same thing. Here are just a few examples of this kind of test.

Missing Parameters

Here’s a unit test that uses reflection to let me know if any of my API controller endpoints have unused route parameters.

[Test]
public void Path_parameters_must_match_method_parameters()
{
    var apiAssembly = typeof(Controller).Assembly;
    var baseType = typeof(Controller).GetTypeInfo();
    var controllerTypes = apiAssembly.GetTypes()
        .Where(x => !x.IsGenericTypeDefinition
            && !x.IsAbstract && baseType.IsAssignableFrom(x));
    var violations = new List<string>();
    var regex = new Regex(@"(?<=\{)[^}]*(?=\})");

    foreach (var controllerType in controllerTypes)
    {
        var methods = controllerType.GetMethods(
            BindingFlags.Public | BindingFlags.Instance);
        foreach (var method in methods)
        {
            var attribute = method.GetCustomAttributes()
                .OfType<HttpMethodAttribute>().FirstOrDefault();
            if (attribute?.Template != null)
            {
                var routeParameters = regex.Matches(attribute.Template)
                    .Select(x => x.Value);
                var methodParameters = method.GetParameters()
                    .Select(x => x.Name);
                var unmatched = routeParameters.Except(methodParameters);
                violations.AddRange(unmatched.Select(x => 
                    $"{controllerType.Name}.{method.Name} - {x}"));
            }
        }
    }

    if (violations.Any())
    {
        Assert.Fail($"The following route parameters have no matching method parameters:\r\n  {string.Join("\r\n  ", violations)}");
    }
}

This particular test depends on the fact that we’re defining our routes directly in the HttpMethodAttribute (HttpGet, HttpPost, etc.), but it could easily be adjusted to account for explicit RouteAttribute usage as well.

The test extracts the route template from the attribute, uses a RegEx to find all of the parameters in the format {param}, and then compares them to the method’s parameters. If there are any unused route parameters, the test fails.

Note: I probably could have collapsed the nested foreach loops into a single LINQ projection if I wanted to (I’m known for that), but in the case of this test, the readability was more important. I wanted other developers to be able to look at this and see exactly what was going on.

So what have we accomplished? Well, for one thing, no-one can accidentally leave out a route parameter without breaking the build. I’ve not only found and fixed my immediate problem, but I’ve prevented myself or anyone else from repeating the mistake in the future.

Bad References

Here’s another example. Tools like ReSharper make it extremely easy to add missing references to a project, or “using” statements to a class. Tools make it so easy, in fact, that you can accidentally add a reference to something you shouldn’t. Here’s a test I made to make sure that no-one adds a reference from the API assembly to the Data assembly. There’s a service layer in between these two assemblies for a reason.

[Test]
public void The_Api_project_should_not_reference_the_data_project()
{
    var apiAssembly = typeof(Api.Startup).Assembly;
    var dataAssembly = typeof(Data.Startup).Assembly;
    var referencedAssemblies = apiAssembly.GetReferencedAssemblies();
    referencedAssemblies.Any(x => x.FullName == dataAssembly.FullName)
        .ShouldBeFalse("The Api assembly should not directly reference the data assembly.");
}

It’s simple, right? But it’s going to save me a huge headache caused by an errant “Alt-Enter-Enter”. I litter my test projects with these kinds of “structural” tests to raise a red flag if anyone repeats a mistake I’ve made in the past.

Right Concept, Wrong Place

The difference between Unit and Integration tests seems obvious to me. The former tests system components in isolation from one another; the latter tests the complete, fully-assembled, real-world system from end to end. It may do so against a fake database, or on a secondary network that simulates the real production system, but the important thing is that it’s the real code doing what the real code will really do. To make sure someone doesn’t get the testing patterns in the wrong place, here’s a one-line test to make sure that the integration test assembly doesn’t touch the mocking framework.

[Test]
public void Integration_tests_should_not_reference_Moq()
{
    GetType().Assembly.GetReferencedAssemblies()
        .Any(x => x.Name == "Moq")
        .ShouldBeFalse("The Integraton test assembly should not reference Moq.");
}

A fresh junior member of the team who’s new to these concepts and the differences between them will be reminded by this test if they’re applying the right patterns to the wrong set of tests.

Right Attribute, Wrong Layer

In my most recent project, the DTOs (Data Transfer Objects) are returned from an API to a public consumer. They are, more or less, the ViewModels of this system, and as such, they need to be validated when they are posted back to an endpoint. We can do this easily through the use of the DataAnnotation attributes, but we need to make sure we’re using the right one. MaxLengthAttribute is used to control how Entity Framework will generate migrations, whereas StringLengthAttribute is used by MVC to validate models. They are very similar, but they are not the same. It’s very easy to slip up and use the wrong one, and then your validation won’t work.

These two tests work in tandem to make sure that you are using the StringLengthAttribute, and that you aren’t using the MaxLengthAttribute on any DTOs. Of course, this depends on your DTOs having a common base class.

[Test]
public void String_Dto_properties_should_not_use_MaxLengthAttribute()
{
    var baseType = typeof(Dto).GetTypeInfo();
    var types = baseType.Assembly.DefinedTypes
        .Where(x => !x.IsAbstract && baseType.IsAssignableFrom(x));
    var propertyTypes = types.SelectMany(x => x.GetProperties())
        .Where(x => x.PropertyType == typeof(string) 
            && x.GetCustomAttributes<MaxLengthAttribute>().Any())
        .Select(x => $"{x.DeclaringType.FullName}.{x.Name}");
    if (propertyTypes.Any())
    {
        Assert.Fail($"The MaxLengthAttribute is for controlling database generation. DTOs should use the StringLengthAttribute instead.\r\nThe following String DTO properties incorrectly use the MaxLength attribute:\r\n  {string.Join("\r\n  ", propertyTypes)}");
    }
}

[Test]
public void String_Dto_properties_should_specify_StringLength()
{
    var baseType = typeof(Dto).GetTypeInfo();
    var types = baseType.Assembly.DefinedTypes
        .Where(x => !x.IsAbstract && baseType.IsAssignableFrom(x));
    var propertyTypes = types.SelectMany(x => x.GetProperties())
        .Where(x => x.PropertyType == typeof(string) 
            && !x.GetCustomAttributes<StringLengthAttribute>().Any())
        .Select(x => $"{x.DeclaringType.FullName}.{x.Name}");
    if (propertyTypes.Any())
    {
        Assert.Fail($"The following String DTO properties have no StringLength attribute:\r\n  {string.Join("\r\n  ", propertyTypes)}");
    }
}
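For reference, a DTO that keeps both of these tests happy might look like the following; the Dto base class and the property names are illustrative stand-ins for whatever your project actually defines:

using System.ComponentModel.DataAnnotations;

namespace Api.Dtos
{
    public class UserDto : Dto
    {
        // StringLength drives MVC model validation; MaxLength stays on the entities for EF migrations.
        [StringLength(100)]
        public string FirstName { get; set; }

        [StringLength(100)]
        public string LastName { get; set; }
    }
}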

Check Yourself

These are just a few examples of the kinds of “structural” or “compliance” tests that I write on a typical project. If I can stop myself from making the same mistake twice, then that’s good, and it will save me time down the road in the form of bugs I don’t have to investigate.


Notes: Including and Excluding Assemblies from Code Coverage

Every now and then I’ll be throwing some “Notes” out there. These are things that I’ve come across before, solved before, and then somehow forgotten about. Inevitably, the problem happens again someday, and I spend far too long searching the web for the solution that actually worked last time.

In the interests of helping my fellow programmers, and future me, I’ll just put these out there as part of my permanent record and maybe it’ll turn up in your search someday. Maybe that’s why you’re reading this right now. Maybe it’s why I’m reading this in the future. In that case: you’re welcome, future me.

Collecting code coverage in Visual Studio is pretty easy, as long as you have access to the Enterprise Edition. Fortunately, I do. I know not everyone does, and I think it’s ridiculous that Microsoft made code coverage an Enterprise feature. This is something we want to encourage the youngest of interns to absorb, not something reserved for senior Greybeards like me.

Anyway, inevitably, whenever I start analyzing coverage, I always run into the same problem. The test assemblies themselves have been counted in my totals. Since the tests all get run, the test coverage for those assemblies is nearly 100%, and it skews my overall results, making it look like my team’s coverage is better than it really is. What I want to do is exclude the test assemblies from the coverage numbers, but the ExcludeFromCodeCoverageAttribute can’t be applied to entire assemblies, and I don’t feel like adding it to every individual test class, so I fire up a web search and end up at the Microsoft documentation for .runsettings files. I follow the advice in the documentation and create a .runsettings file with nothing but a ModulePath Exclude section for the test assemblies, and the next time I gather coverage, I see a bunch of assemblies that aren’t even mine. I’m talking about NUnit, Json.NET, etc.

The problem is that as soon as you specify modules to exclude, everything else gets included by default. You need to specify an Include section as well, but the syntax can be a little weird because it’s a RegEx, and now we have two problems. I try to write a simple RegEx to match only the files in my project, but the problem is that the files that aren’t mine are located in folders named after my projects, so my “magic words” are still part of their path, and they therefore get caught up in the results. I need a RegEx that will look for my root namespace as part of the filename, but not as part of the complete file path.

Enough already. What’s the answer?
Okay, here are the magic words. Assuming my root namespace, perhaps the name of the client, is “Foo”, my runsettings file will look like this:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
    <RunConfiguration>
        <MaxCpuCount>1</MaxCpuCount>
        <ResultsDirectory>.\TestResults</ResultsDirectory>
        <TargetPlatform>x86</TargetPlatform>
        <TargetFrameworkVersion>Framework45</TargetFrameworkVersion>
    </RunConfiguration>
    <DataCollectionRunSettings>
        <DataCollectors>
            <DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0" assemblyQualifiedName="Microsoft.VisualStudio.Coverage.DynamicCoverageDataCollector, Microsoft.VisualStudio.TraceCollector, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
                <Configuration>
                    <CodeCoverage>
                        <ModulePaths>
                            <Include>
                                <!-- Include only our own (Foo) assemblies -->
                                <ModulePath>.*\\Foo\.[^\\]*\.dll$</ModulePath>
                            </Include>
                            <Exclude>
                                 <!--Exclude the test assemblies themselves -->
                                <ModulePath>.*Tests\.dll</ModulePath>
                            </Exclude>
                        </ModulePaths>
                    </CodeCoverage>
                </Configuration>
            </DataCollector>
        </DataCollectors>
    </DataCollectionRunSettings>
</RunSettings>

It’s that middle part there that’s the important bit, especially the RegEx in the Include section. It’s looking for a filename that starts with “Foo.” immediately after a backslash (because I’m on Windows), followed by any number of characters that aren’t another backslash, and ending in “.dll”. Users of other operating systems may have to adjust the direction of the slashes accordingly… I haven’t looked yet.

Then, I have a more regular-looking Exclude section to ignore any file that ends in “Tests.dll”, which is the convention I happen to be following (e.g. “Foo.UnitTests.dll”, “Foo.IntegrationTests.dll”, etc.).

Hopefully this helps someone. If you’re a RegEx ninja, please don’t kick sand in my face. If you’d like to share a better way of expressing this, I’m open to suggestions, of course.


Fit and Finish

My father’s “man cave” was always covered in sawdust. He was a woodworker by nature. He had a normal day job, but I think in his ideal world he would have preferred if people had just paid him to stay in his wood shop all day, building things. He made things around the house, and he made things to sell. He even made a seven-foot long, three-story plantation dollhouse for a show once. It was incredibly detailed, down to the wooden roof shingles, individually cut by hand on the band saw.

My whole life growing up, I was surrounded by his tools. I cannot remember a time before that band saw was a fixture in the shop. Occasionally an older, cheaper tool that he’d started with would be replaced by a newer, better one because he’d earned it. The loud, portable Sears table saw that had moved with us through three states eventually made way for the shiny new Delta that was eerily quiet by comparison. Its motor didn’t scream electricity at you. The Delta just sounded like a powerful wind being thrown from the blade like the loudest whisper in the world. The old saw had a fence that you had to adjust by banging it around with your fist, and then tapping it with a mallet to get it as close to what you wanted as you could manage. The new one had a latch in two parts with a thumbwheel between them that let you dial in the finer details with extreme precision. He used it to make things better and faster than he could before. Don’t get me wrong, as a true craftsman he achieved fine results no matter the tools, but it was much more enjoyable with better equipment and it let him try things he may not have attempted before.

Woodworking was more than a hobby, it was his zone. I think it just made sense to him, and maybe it’s why he didn’t understand why his son just wanted to “play on the computer” all the time. At some point in high school we had a talk one day where I tried to make him understand that I was making things too. I was just doing it with different tools and materials. He never really tried to talk me into leaving my room to build “real things” much after that, and I’ve always regretted that I may have hurt his feelings that day. I never really got the chance to clarify whether I’d simply turned him off of the idea that I’d be like him, or whether a light clicked on and he saw that we were already the same. I wish he was still around to talk to. I’d like to let him know that he had far more influence on me than he ever realized.

I started using the phrase “Fit and Finish” to describe my professional values a few years ago. I’ve worked for a lot of different companies, with widely varying cultures. I’ve worked for small companies that absolutely shared my values, and for large companies that maybe didn’t, but at least had a smaller division within them that did. I’ve been part of the screening process that determined whether a candidate was the right cultural fit for that division, and whether to keep someone on when they weren’t. I’ve had to remove myself from more than one company that did not share my values, or said they did on paper, but didn’t take them seriously. Some organizations claim to share my core values, but really only view them as marketing terms to be promised to clients, never fully pursued and realized. Liberal use of the word “craftsmanship” makes it sound like you know what you’re doing, and the odds that the client will know the difference are vanishingly small, otherwise they wouldn’t need to hire you, right?

My upbringing makes it somehow impossible for me to just throw something out there that’s good enough to fool everyone else and move on. Like any artist, I know where every flaw is, and I can’t unsee them. Yes, I called myself an artist, and what I do “art”. That’s how I view it, and that’s how I treat it, otherwise what’s the point? If you’re just in this industry for the paycheck, then I can’t even relate to you. If your company consistently values short-term profits over long-term reputation, then we simply can’t work together. I have a need to feel proud of the work I do. I need it to be understandable and maintainable by the next member of the team.

Some companies only want to think as far as rough framing; what I consider carpentry. Me? I’m into fit and finish; what I consider cabinetry. They’re different. Just because it’s the shape of a house and keeps the rain out doesn’t make it a home. There is a ton of work that goes into “finishing” a house, and you can expect that all of that work will be inspected by someone whose job it is to find any mistakes in the plumbing or electrical work before they are sealed up behind the drywall. Then, someone is going to inspect the drywall before the trim goes up. Finally, someone’s going to inspect the trim.

That’s not to say that everything has to be perfect. One company that I left called me a perfectionist on a review, and that was somehow meant as a negative comment. Clearly this relationship was not going to work out. I am not a perfectionist, not by a long shot, but there is a minimum bar of quality and correctness that I won’t compromise, and I think it comes down to my definition of “good enough”.

Good enough to pass code review is not good enough. Good enough to pass QA is not good enough. Good enough that the client won’t notice the bugs and limitations during UAT, or during the warranty period is not good enough. Good enough to get the job done, however, IS good enough. My current project uses ASP.Net MVC’s built-in Inversion of Control (IoC) container. Why? Because it’s good enough. It does everything I need, and should continue to do so. If I run into a scenario that it can’t handle, then maybe I’ll swap it out for something more full-featured like StructureMap. Until then, it’s truly “good enough” because it’s all we need.

Maybe I should say it’s “good enough for now”, which is a subtly different concept. Did you choose a component because you aren’t exactly sure where a project is going to end up, and you don’t want to invest time in something you may not need? Then you’d be correct in going with “good enough for now”. If you chose a component because someone else, perhaps the client’s internal development team, will be taking over the project before it ever has a chance to become “your problem”, then that’s a bad choice, and a terrible attitude. If you justify it by smugly saying the word “agile” at me, I will kindly ask you to leave my team.

You can find a buzzword to justify anything you want. If you blurt out “yagni” because you simply can’t be bothered to draw on experience, or worse yet have insufficient experience to draw on, then you should go work on smaller things for a while and learn. When a more experienced developer says you’re going to need something, you’re probably going to need it. Why? Because they know enough to know that you always end up needing it. So go ahead and tell me why we don’t need to build logging into the system from day one, and you can have fun retrofitting it onto your other systems later on when you turn the corner. The voice of experience should be listened to. People know things.

One of the things I try to drive in every team I’m a part of is consistency. I don’t want five different ways of solving the same problem. That doesn’t just mean refactoring out repeated code. I’m talking about patterns. One of my pet peeve symptoms on an older system is observable strata in the code. Can you view your codebase like an archeological dig? Can you tell which controllers were written during the “Bob” era and easily tell them apart from the ones written in the “Fred” era? Are there old controllers or services that are written differently than the current ones? If so, why? If the old way was “good enough”, then why is there a new way? If it wasn’t good enough, then why wasn’t the old code updated?

Some teams are uncomfortable going back and messing with “finished” code because they’re afraid of breaking things. That’s what unit tests are for, my friend. If you’re not on board with the value of testing, then you’re just not on board with quality as a selling point in the first place, and that’s an entirely different conversation. I should be able to fearlessly change code patterns and have my confidence backed up by a wall of green lights telling me that I haven’t broken anything. I should use professional-grade refactoring tools to make certain kinds of changes for me, and have absolute confidence that the resulting code is functionally equivalent to the code I started with.

Approaches and patterns change. We grow, we learn, we adapt, and that’s fine. Don’t leave old code behind though. Upgrade it to the new pattern, or you don’t really have a new pattern at all; you just have one more way of doing the same thing, and that’s bad.

Returning to the house metaphor, imagine the kitchen cabinets. You’d expect them to be installed consistently, right? You’d expect all of the knobs and hinges to be the same type, and installed at the same height, right? If, partway through the construction of your kitchen, one of the apprentice carpenters with something to prove discovered that mounting the hinges just one inch further toward the vertical center of the door relieved stresses on the hardware, and could be mathematically proven to increase the hinge’s lifespan by fifty percent, would you want him to hang the remaining doors with the hinges positioned differently than the ones he’d already finished?

If that apprentice decided that given the average height of your family members, moving the knobs on the cabinet fronts downward by two inches would result in a better, more ergonomic experience, would you want him to install the remaining half of the knobs in a different position than the first half? Of course not. The inconsistency would be right there, staring you in the face the entire time you live in that house. You’d never unsee it. It would irritate you every single time you look at the kitchen cabinets and see that half of the knobs are different.

It’s not that different when developing software. I don’t want to have to keep track of the difference between the “Bob era” and “Fred era” controller patterns. If Fred is unwilling to upgrade all of Bob’s older code to fit his new pattern, then Fred should just create an entry on his “to do” list until he either finds the time to upgrade everything, or starts on the next project where he can adopt the new pattern up front. If Fred truly needs to make a change because Bob’s pattern won’t allow him to do something that he absolutely needs to do, then we might be able to make an exception, but that’s different than establishing a new pattern. Never change how things are done “from now on”. Change how things are done… period.

You might be asking “why?” Is it just me being pedantic about things, or obsessive over consistency? No, there are reasons. When I bring a new developer onto the team, I want them to fall into the pit of success. When I assign them their first controller feature, I want the rest of the controllers to be examples of the way things are done on this project. I don’t want to have a conversation where I have to explain what the good examples are versus the bad ones. On top of everything else the new member has to learn, why should he or she need to learn the underlying archeological history and strata of the project, and remember which examples are current? I want them to find “current” at every turn. I want my projects to look like they were made by professionals.

If I hand this project off to someone else, how difficult will the knowledge transfer be? How much tribal knowledge is required to work on it? As a consultant, it is my job to leave. When that happens, I want the developers that take over the project to look around and say “that makes sense”, or at least “that’s consistent”. I don’t want them to find five different ways of solving the same problem with no objective way to figure out which one is “right”. What if they’re both right, and Bob and Fred simply have different opinions and styles? Can the new guy tell that by looking at the code? No, he can’t. All he can tell is that there are two different ways of accomplishing the same task.

“Fit and Finish” is about looking like you know what you’re doing. It’s about looking like a professional, and not someone who’s just winging it. In a previous post, I talked about how we, as an industry, are still trying to figure out what we’re doing, and that still applies. Everything we do will still be laughably out-of-date in two years’ time. It’s okay to not have all the answers, but you owe it to the clients that are paying you to do the best you can with the knowledge you have, and to pass that knowledge on as simply and easily as possible. Your code shouldn’t be a loose collection of clever hacks. It should be a purposeful, thoughtfully composed system of components designed to work with and complement each other.

Take pride in your work. It matters. It shows.


The Eternal Journeyman

It seems like every “luminary” in the world of software development has chimed in on this one at some point, so why not one more, right? Sure, I’m not a famous “known name” in the software world. I’ve done some cool and important things, but you’ve probably never seen or used them, at least not personally. If you live in Ohio then you’ve definitely been a consumer of my code, but you’ve still never seen it. It’s invisible stuff running in multiple Ohio government agencies, quietly shuttling your information from place to place. If you have a concealed-carry license, parts of your background information totally passed through my code. And before you ask, no, I didn’t siphon any of it off.

I work on stuff that affects lots of people, but I’m not well known. So what do I have to offer to the conversation? What I have is another perspective, from a guy who aspires to the ideals of software craftsmanship, but has enough impostor syndrome to keep me humble, and certainly enough to stop me from declaring myself a “Master Craftsman” and telling you how to do things. Claiming the title of “Master Craftsman” actually flips your bozo bit pretty quickly in my head. I’m just a guy that writes code, and sometimes I share things with others.

I am not a master craftsman, and neither are you. Nor is anyone in our entire industry. How could we be when everything we know becomes obsolete every two years? That’s my point. That’s my perspective. I work in an industry where none of us are ever really going to truly “get there”. We will never achieve mastery.

We, as humans, have a lot of industries down to a science. We’ve been building actual physical buildings for millennia, and we’ve got that pretty much figured out, right? Every now and then someone comes up with some new high-tech composite material and shifts the landscape a bit. We develop large-scale computer modeling, CNC milling, thinner glass, and suddenly our art museums start looking less like boxes and more like melty organic blobs. That’s just the skin though. The fundamental knowledge of how to safely prop up a structure and not impede the traffic flow within it hasn’t really changed that much. We’ve got the core science down, and we’ve had it down for generations.

Manufacturing, Automotive, Consumer Electronics, these are all industries that undergo constant evolution, but the core ideas of what we’re doing and how we do it are pretty much smoothed out. The rough edges have been sanded off, and the general “shape” of the industry doesn’t change that much. We get better at making chips smaller and smaller. We make more efficient CPUs by stacking more transistors in less space, but we don’t just throw out the transistor altogether and start using frob nodules instead. At least not yet. Some major shift will happen somewhere over the horizon and it will change how we do things, but for now it’s transistors and heatsinks.

Compare that to the software world. We’ve only been talking to computers since the 1960s. This industry is still in its infancy, and we’re changing our minds about the right way to do things on a daily basis. We don’t have as many languages in play as we used to, and the business world has largely settled on .Net, Java, and PHP as the main ways we get things done, but we still have these major tectonic shifts happening every now and then. The last big one was when everyone got all excited about functional languages and how they were going to change everything we do. Except they didn’t. They dominated our user group and convention topics for a year or so, changed how we do a few things out on the periphery of actual business, and then they faded out of the limelight. When’s the last time your local user group hosted an F# talk? Yeah, that’s what I thought.

And now it’s everything “in the cloud”. But which cloud? Azure? AWS? Should these things we’re putting in the cloud be containerized? Which container? Docker? Do we need Kubernetes? How do I even begin to pick one? What if I pick wrong? What happens if I advocate for building a client’s critical systems on top of Azure and then Microsoft loses interest and walks away like they’ve done with so many other things? Anyone remember Zune? Yeah… I have three or four of those. Windows Phones? Same thing. I have a drawer full of old Windows Phones. Microsoft changes directions like a crack-addled squirrel trying to cross a busy intersection. What if I’d told a client to build their front end on Silverlight? Now I’m stuck being the “Silverlight guy” while everyone else moves on to newer things.

Years ago, I noticed a pattern forming. Any new Microsoft technology that I personally got behind would get killed off. I am apparently the kiss of death for all things Microsoft; so much so that friends made me promise not to get a Hololens because they wanted it to be a thing. They still want it to be a thing… and it still isn’t. Maybe I should just go ahead and buy one just to put a bullet in its head once and for all. I’m frankly surprised that the Surface line is still around since I actually bought one of those. But I digress.

The point is that our industry is nowhere near settled on what we do. I’ve spent the last few years actively avoiding the front end of web applications because there’s still so much churn going on over there. Knockout, Angular, React, Ember, Vue… everyone wants to change the world of web applications, and I’ve tried to avoid the whole mess until the dust settles. It’s not that I’m jaded. I’ve just backed too many losing horses in the past and I’m experienced enough to know that, in all likelihood, none of these frameworks will emerge as the eventual winner, so I’m not hitching my wagon to any of them.

My prediction is that something far less revolutionary will come along. It will seem quiet and tame by comparison. It will make just enough sense that it will quietly take over as the boring but safe choice for actually getting stuff done, in the same way that jQuery and Bootstrap became the de-facto tools in their areas. I also predict that someday we’ll look back on the chaos that was the front-end landscape of the early 21st century, and we’ll regret every single choice we made, no matter how right it seemed at the time.

Despite the metaphor that occasionally gets thrown around by leaders in our industry, this isn’t like Samurai in feudal Japan. You can’t just demonstrate a few katas at a new dojo (job) to easily establish your rank and standing because all of your katas are so two years ago. This is more like working for years to finally achieve your black belt in a particular style, only to suddenly find that your country is being invaded by foreigners from some strange new land that have a non-standard number of arms and a totally different center of gravity. Everything you know is wrong and you have to start all over again. You’re not a master anymore. You’re just a highly-experienced apprentice.

FlyingMachine

You know those old timey black and white films of men crashing their ill-conceived “flying machines”? We laugh at their idiocy, at the fact that they tried to fly without the most basic high-school-level understanding of aerodynamics and lift. What did they think they were doing? Yeah… Well that’s us. We have absolutely no idea what we’re doing. History will look back at our feeble efforts and laugh mercilessly at us.

In the midst of all this constant change, I refuse to believe that any of us can call ourselves master craftsmen. We are all journeymen at best, and will be for the rest of our careers. The only people who can call themselves “master” are those who keep doing the same thing for a significant period of time. They are the Fortran and Cobol programmers of the world. The ones who came out of retirement and commanded ridiculous salaries in the late ’90s preparing for Y2K because it was easier to dust them off and overpay them than it was to convince fresh, new developers to train up on skills that they’d be throwing away in a couple years time once the crisis was over.

Many of you are too young to remember when Y2K was the big scary monster that was about to bring everything crashing down. Evangelicals prepared themselves for the end times, and normally level-headed families stockpiled food and ammunition. We were expecting to wake up January 1st, 2000 with no power and no phones. Our banks were going to be on fire, and the fire trucks weren’t going to start. Violence and looting in the streets, dogs and cats living together, mass pandemonium. I was convinced that the phone system would crash, not because of the actual bug, but because we were going to simply overload the thing when everyone phoned their Mom first thing in the morning on 1/1/1 to make sure everything was okay and vice versa.

But none of that happened, and you want to know why? Because there were armies of true masters of an obsolete craft out there that scrambled to rewrite the world in time to save us all. These were men and women who still held a mastery over COBOL long after most of them had been forced to retire or move on to new and unfamiliar languages and idioms, and in that regard, they were not modern masters. In their new roles, and in their new languages, they were just like the rest of us, scrambling to keep up. But they were masters at their particular game. A game no-one else was playing anymore. They were Samurai. They were Jedi. And for the last year of the twentieth century, they were gods.

Someday, our great great grandchildren may have our industry well and truly sorted out once and for all, and maybe they’ll be able to call themselves master craftsmen, but not us. No way. We need to come to terms with the fact that only the very core motivations of our industry are settled. The general approach has been worked out, but not the specifics of implementation. The implementation is our best flailing attempt to build something for a client using the primitive stone tools we have available at the moment. We’re just coming into the bronze age here, and we think we see, ever so vaguely on the dim and foggy horizon, what the future looks like, and we’re still probably wrong about that. Machine Learning, Artificial Intelligence, Natural Language, all of that is becoming commonplace. You can’t attend a software conference without tripping over an ML presenter these days, but are we really going to use it for the day-to-day business of moving money around and balancing accounts? I don’t know… maybe?

So should we give up on the idea of Craftsmanship? Should we cut corners, and just do “whatever it takes” to get something (anything) shipped? Do we give up on getting things “right” and just get them done? I mean… why bother if we’re just going to throw it all out in the next big rewrite anyway, right?

Wrong. We still need to be taking  pride in what we do, and we need to leave behind code that the next guy can understand and improve on. That is how we learn and move forward. We still need to build to the best of our abilities, even though we know that someone will roll their eyes at our code in the future. With any luck, that person is just us, and the thought in our head will be “What was I thinking?” and not “What idiot wrote this?” We need to feel good today about the code we’re writing, even if we probably won’t feel the same way about it two years from now. If there’s one thing we can learn from the other industries, it’s this:

If it’s worth building, it’s worth building well.

Notice that I don’t say “correctly” or “right” here because whatever we do will inevitably be wrong in a few years’ time, but it needs to be as correct as we can get it for now. The better you build it, the longer it lasts. Do you think your grandchildren will be fighting over who gets your Ikea desk after you die? What about great-grandpa’s handmade, roll-top writing desk with the white oak and black walnut compass rose inlay? They’re the same thing, right?

Craftsmanship is not a destination, it’s a journey. You will probably never reach true mastery in your lifetime. But that’s not the point, is it? Who wants to be “done” anyway? Where’s the fun in that? I mean, sure, I’d like the occasional “vacation project” where it’s all stuff I already know, and I get to feel super smart for a few months, but I’ll always find that the world has moved on while I was enjoying my sense of mastery.

When someone asks me “What do you want to be doing in five years?”, my answer is always the same: “This, but better”.


Pluralsight Course Updated

For those who have watched my Pluralsight Course, it has been recently updated to include changes brought by Raspbian Stretch. I wasn’t able to completely refresh the content end to end, but the former CrashPlan module has been completely replaced, and now talks about setting up remote backups using Duplicati.

Everything else up through Module 9 has been refreshed, updated, and had content replaced where possible. In some cases, this is simply an overlay on the video indicating changes, but there are numerous places where narration and video have been updated in-place to bring the course up to the current OS and software.

So, of course, they released a new Raspbian mere days after my updates. <sigh/>

Anyway, if you’ve watched my course in the past, I thank you, and suggest that you might want to go check out the updates. If you haven’t seen it, and you have a Pluralsight subscription, then you should check it out.

Thanks.


External SSH access on the Raspberry Pi

Over the course of this series, I’ve shown you several different ways to access your Raspberry Pi Home Server remotely. We’ve looked at OpenVPN, for connecting to your home network, RealVNC for opening a remote desktop session, and SSH for opening a remote terminal window. The first two options have worked whether you are on the same network at the time or not, but the last option only works when you’re at home, or already connected through VPN, at least so far.

In this post, I’m going to punch a hole in my router’s firewall to allow external SSH access to my server. I am doing this to support some other tools that I’ll discuss in future posts, but for now, I’m just going to get basic external SSH access up and running. If you’ve enabled SSH on your Pi, and can already connect to it over the local network, then all that’s required is to open up your router’s admin page, and map a port from the outside to port 22 on the Pi.

There, that was simple. It’s also asking for trouble. If your pi user’s password is nice and strong, then it’s not asking for a lot of trouble, but there’s always the possibility that someone’s going to come knocking and try to brute-force their way into your Pi, and therefore your network.

My first piece of advice would be to NOT simply map port 22 on the outside world to port 22 on the Pi. That’s the first thing a hacker’s going to try. The second thing they’ll try is port 222, then port 2222 and so on. Let’s not make it TOO easy, right? Apart from security concerns, there’s also the simpler problem that you can only map each external port on the router to a single device on the local network, so if you have multiple devices that you’d like to connect to using the same protocol, they can’t all use the same port. You’ll need some way to tell them apart.

For this example, I’m going to go with a simple convention of one or two digits to identify the device I want to connect to, and three digits for the port. If my server’s internal IP address is 192.168.1.5, and I want to talk to port 22 (SSH), then my external address might be 5022. A different server, at 192.168.1.15 would use 15022 for the external SSH port. Get it? This also means that I can remember which port goes to which computer easier. This starts to fall apart for higher-numbered ports, since port numbers only go up to 65535, so you might need to abbreviate things later on, but let’s at least start with something vaguely mnemonic.

That’s some basic security through obscurity, but we can do much better than that. Odds are, you’re using a fairly basic password for your pi account. Maybe you’ve added some special characters and some capitalization to strengthen things a bit, but you need to ask yourself “Is this password strong enough to protect all my stuff from evil?”. If you’re not totally confident in your password’s strength, then it’s time to take it to the next level.

Rather than using passwords to secure SSH access, let’s set up public/private key-based authentication instead. You can think of a public/private key pair as kind of like a super-password. The private key is a way of asserting your identity, and the public key is a way of verifying that assertion. The private key is way more complex than you could ever hope to remember, and certainly more than anyone using current technology could brute-force their way through within our lifetimes.

You may have used keys such as this already in order to connect with systems like GitHub. If you have already generated a key, then you can skip this step and use the keys you already have. There’s no reason you can’t use the same key pair for any number of different services. 

Check your home directory for a hidden subdirectory called “.ssh”. For Mac and Linux users, this will be at “~/.ssh”; for Windows users, it will be at “C:\Users\USERNAME\.ssh”. If there are already files in that directory called “id_rsa” and “id_rsa.pub”, then you already have a key pair. If you’re missing just the public key, then keep reading. The public key is easily recreated, and we’ll get to that in just a minute.
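If you’re not sure what’s in there, a quick directory listing will tell you. On Mac or Linux:

ls ~/.ssh

And on Windows, from a regular command prompt (assuming your profile lives in the usual place):

dir %USERPROFILE%\.ssh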

Generate a key pair

Assuming you don’t already have a key pair on the device you want to connect from (the client, not the server), you’ll need to generate one. For Windows users (like me), this will be very different than it is for other operating systems. I’ll start with the command line instructions for Mac and Linux users first.

Mac & Linux Users

All Mac and Linux users need is one command.

ssh-keygen

The tool should prompt you for everything you need. You can accept the default for the filename, which will be id_rsa, stored in your home folder under a “.ssh” directory. You’ll also be prompted for a passphrase. This is optional, but assigning a passphrase means that even if someone got access to your computer, they still wouldn’t be able to SSH into your server without knowing that passphrase. This is your choice, and I won’t judge you if you leave it blank. You should now have two files in the ~/.ssh directory called “id_rsa” and “id_rsa.pub”. That’s it, you’re done. Skip ahead to “Installing the public key”.
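By the way, ssh-keygen will also take arguments if you’d rather be explicit about the key type and size, and a comment can make the key easier to recognize later. Something like this (the comment text is just an example):

ssh-keygen -t rsa -b 4096 -C "my laptop"

The defaults are perfectly fine for what we’re doing here, though.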

Windows Users

For Windows users, I’m assuming you’ve already installed PuTTY, since I’ve used it for this entire series so far. If not, go install that now. It’s not the prettiest website in the world, but the tool is the de-facto standard for SSH in the Windows world, although I hear true OpenSSH is on the way for Windows users. We’ll need the “puttygen” tool that gets installed along with PuTTY. You can simply press the Windows key, and type “puttygen” to run it. The program looks like this:

Puttygen

Press the Generate button, and move the mouse around in the blank area until the key is generated. When it’s complete, it will look something like this:

Puttygen2

The public key is in plain text format in that central textbox. It’s also conveniently highlighted, so you can simply right-click it and copy it to your clipboard. We’ll need it in just a minute.

Press the “Save private key” button, and save this file to the .ssh folder under your home directory (e.g. C:\Users\Mel\.ssh). Create the folder if it doesn’t exist already, and call the file “id_rsa” by convention.

Missing .pub file?

If, for some reason, you have a private key file (id_rsa), but you don’t have the matching public key file, then there’s an easy fix. Remember that the public key is just a way of validating the private key. All the information needed to generate a public key is contained in the private key. For Mac and Linux users, you can recreate the public key file like this:

ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub

For Windows users, click the “Load” button in PuttyGen, and load up the id_rsa file. The rest of the UI will fill in, and you can right-click in the public key text box, and “select all”, then right-click again and copy it to the clipboard. You can also save the public key into a file, but it won’t be in the right format for the Pi to consume. What you really need is right there in that textbox, so just copy it to the clipboard.

Installing the public key

Next, you’ll need to install the public key onto the Pi, which will allow the Pi to validate the private key when it sees it. You do this by tacking it on to the end of a file that may not exist yet. Remote into the Pi either through SSH or VNC, get to a command line, and edit the authorized_keys file.

sudo nano /home/pi/.ssh/authorized_keys

Windows users can just paste in the public key we copied to the clipboard above. Mac and Linux users will need to get its contents from the id_rsa.pub file we generated earlier. Copy its entire contents to your clipboard on the client computer where you generated it, and then paste it into the nano editor on the Pi. Close and save the file.
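Mac and Linux users also have a shortcut available: ssh-copy-id will append your public key to the Pi’s authorized_keys file for you, creating the file if it doesn’t exist yet. Using the example address from earlier:

ssh-copy-id pi@192.168.1.5

One more thing worth knowing: SSH is picky about permissions on the server side, and it will quietly ignore an authorized_keys file that’s too wide open. If your key login doesn’t work later on, try tightening things up on the Pi (adjust the paths if you’re not using the pi account):

chmod 700 /home/pi/.ssh
chmod 600 /home/pi/.ssh/authorized_keys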

For Mac and Linux users, this is all you should need. Windows users will need to install the new private key into PuTTY itself. Load up an existing profile, or create a new one with the internal IP address of the server, expand the “SSH” section on the left, and then click on the “Auth” node.

Capture

Click the “Browse” button, and then go find the id_rsa file in your .ssh folder. Scroll the left-hand section back up to the top, and click on Session. Give the new session a name, and save it. If you loaded an existing session, then clicking Save will update it. Either way, PuTTY should remember the private key now. Click “Open”, and you should get a login prompt as usual, but after you enter the username, you won’t be prompted for a password. That’s it. You’re authenticating using keys.
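Before moving on, it’s worth proving to yourself that the key is actually being used. Open a fresh connection (a new PuTTY session for Windows users, or a plain ssh command for everyone else, substituting your own server’s address) and make sure you get in without being asked for the pi user’s password:

ssh pi@192.168.1.5

If you put a passphrase on the key, you’ll be asked for that instead, which is expected.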

Locking it down

We’ve laid the groundwork for securely accessing your Pi from outside your own network now, but it’s still possible to log in using a plain name and password. If you were to SSH to your server from a different computer (or PuTTY profile), it would just go on asking for a name and password like it always has. We’ve made logging in more convenient if you have a key, but we’re not yet requiring a key. We need to turn off password-based authentication next.

Edit the SSH configuration file

sudo nano /etc/ssh/sshd_config

Note the name carefully: the server’s configuration file is “sshd_config” (with an underscore), while the similar-looking “ssh_config” in the same directory configures the client. If the file comes up completely blank, double-check the filename, because nano will happily start a brand new, empty file for you.

Scroll through the file, and look for the following values (or use ctrl-w to search for them), and set them accordingly.

PermitRootLogin no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no

If this is the first time you’ve edited this file, then some of these lines may be commented out. Remove the pound sign from the beginning of a line to uncomment it. Finally, restart the SSH service to enable the changes.

sudo service ssh reload
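Incidentally, if you want to double-check the file for typos before (or after) reloading, sshd can validate its own configuration and report any syntax errors without disturbing the running service:

sudo sshd -t

More importantly, leave your current SSH session open and test a brand new connection before you log out. If something went wrong, that existing session is your way back in to fix it.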

All that’s left is to map that external port if you haven’t already, and you’re ready to connect from the outside world. Windows users will need to make a new PuTTY profile that uses the key and the external address of your home network rather than the internal address of the server. Unless you have a static IP address at home, you’ll need some kind of dynamic IP service such as no-ip.org, which I covered in the OpenVPN post.
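To test it from outside your network (a phone hotspot works nicely), connect to your public address on the external port you chose. Using the 5022 example from earlier and a made-up dynamic DNS name, that would look something like this:

ssh -p 5022 pi@myhomeserver.example.com

PuTTY users would put that host name in the Host Name box and 5022 in the Port box of a new profile.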


CrashPlan is dead

It looks like Code42 isn’t interested in home users anymore, and they’ve announced that they are shifting focus to enterprise users. Even if you were backing up to your own computers, your account is still going away, and with it your ability to back up your stuff.

Now is the time to start looking for alternatives. I don’t have a recommendation yet, but I’m looking into it. I’m interested in hearing what everyone else is using, and how it’s been working out. I’d like to be able to recommend a drop-in replacement for the CrashPlan workflow, but nothing has quite fit the bill just yet.

For the short-term, Windows 10 users like myself can use Windows’ built-in backup system with a network share living on a Raspberry Pi. You can also use Resilio Sync or SyncThing to mirror your important files to the Pi.


Upgrading from Jessie to Stretch

On Wednesday, Aug 16th 2017, a new major Raspbian OS version was released. The “Stretch” release replaced the previous “Jessie” version, and makes a number of changes that may or may not affect you. I’m editing this post as I go through the process of upgrading my servers to the Stretch release, to let you know what I saw.

This will likely be another “live” post for a little while until I’m sure everything is stable, so remember to check back now and then, and please mention any problems you’ve seen in the comments.

Known issues (so far)

This upgrade is not quite ready for everyone to just jump in and do. I’ve already found a few things that aren’t working correctly.

Samba shares

My shares were gone after the upgrade, and even purging and reinstalling Samba couldn’t bring them back. I tried for hours to figure out what the problem was before deciding to go whole hog and do a dist-upgrade, which I normally don’t recommend. A lot of things can go wrong with a dist-upgrade since you’ll sometimes get newer versions of packages that aren’t quite ready for the real world. In this case, it worked. My shares are back online. DO NOT try this without a full backup. You’ve been warned.

Network UPS Tools (NUT)

The first time I tried the upgrade process, I got a rather troubling error message at the end saying that there were errors while processing “nut-client”, “nut-server”, and “nut”, so I gave apt-get upgrade a second pass just to make sure everything else was updated properly. Only these three packages had failed to upgrade, and it appears it’s because the nut-client service failed to restart. This is because the configuration files were overwritten by the upgrade, and didn’t have any useful information in them. After I restored my configuration settings, I was able to complete the installation. See below for more details.

Take a Backup

I shouldn’t even have to tell regular readers to do this. We’re about to make major changes to the OS itself. You should shut down the Pi and take a backup of the SD card before going any further. If you’re booting from a hard drive, then you’ll want to attach that to another computer and back up the root partition as well. After all, that’s where most of your stuff actually lives.
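If you’ve never imaged the card before, one option is to pop it into another Linux machine (or a second Pi) and copy the whole thing with dd. This is just a sketch; the device name below is a placeholder, so check lsblk first and make absolutely sure you’ve got the right one, and leave off status=progress if your version of dd doesn’t understand it:

sudo dd if=/dev/sdX of=~/pi-backup.img bs=4M status=progress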

Consider upgrading from the desktop

If you can boot to the desktop, then I’d consider doing that. There are a couple times during the upgrade process where I found it convenient to open a second terminal window to examine something the upgrade process was going to change. If you connect through SSH, then you can use multiple sessions to achieve the same effect. If you’re running Raspbian Lite, and only use a directly-attached monitor and keyboard, then you may want to keep a note pad nearby in case you need to take note of proposed config file changes so you can restore your customizations afterward.

Update APT sources

To get the new software packages, APT will need to know where they live first. All you need to do is edit the two “source” files to point to the new “stretch” repositories.

sudo nano /etc/apt/sources.list

Change all instances of “jessie” to “stretch” and save the file. You can do this by hand, or you can let nano do some of the work for you. To do a search and replace, press ctrl-\. That’s the control key and the backslash. You’ll be prompted for the text to search for (jessie), and what to replace it with (stretch). If you haven’t done much to these files, then you should only find two matches, one on the first line, and one on the last line, commented out. Press “Y” for each match, or “A” to replace them all at once.

When you’re done, close the file, saving your changes (ctrl-x, y, enter). Next, do the same thing again, for a second file.

sudo nano /etc/apt/sources.list.d/raspi.list

Change all the “jessie”s to “stretch”es, and save the file. Finally, do a standard update/upgrade.

sudo apt-get update
sudo apt-get upgrade
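As an aside, if you’d rather not do the search-and-replace by hand, sed can make the same jessie-to-stretch substitution in both source files in one shot. We’ve already done it the manual way, so this is just one for next time:

sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list /etc/apt/sources.list.d/raspi.list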

What to expect during the upgrade

This is a pretty massive update, so you can expect this to take a while. Don’t just leave and come back later though, because you’ll be prompted for answers several times during the upgrade process. The first prompt will be to approve the proposed changes, just like every time you do an upgrade, but the list will be huge. I’m talking hundreds and hundreds of packages have updates. You’ll also be prompted when configuration files have updates. I got prompted for changes to the following files:

  • /etc/skel/.bashrc
    I’ve never made any changes to this file by hand, so I just took the new version by pressing “y” and then “enter” when prompted. Note, these kinds of changes will default to “n”, so fight the urge to just hit enter like you’re used to. If you’ve made any customizations to this file, then you might consider opening a copy of the file in another window, and then reapplying your changes by hand when the update is complete. See the dhcpcd.conf step below for details.
  • /etc/login.defs
    I took the new version of this file as well since I’ve never touched it myself.
  • Graphical prompt for the keyboard language
    I let this one “guess”, which is the default option.
  • /etc/dhcpcd.conf
    Now this IS a file that I’ve messed with. It’s how you set up static IP addresses on the Pi these days, so I first used the “D” option to examine the differences between the proposed new version and what I currently have. There were changes to things other than defaults; things I had modified by hand. I opted to open a copy of the file in nano from a second command prompt, and then hand apply my customizations when the upgrade completed. Press “y” to take the new version of the file. Don’t forget to come back and apply your customizations later on, though.
  • /etc/lightdm/lightdm-gtk-greeter.conf
    Another file I haven’t touched by hand. I took the new version.
  • /etc/nut/nut.conf and /etc/nut/upsmon.conf
    I’ve definitely customized these as part of installing the battery backup (see Network UPS Tools), but the customizations aren’t that extensive. I opened a couple new terminal windows, opened the files in nano, and then took the new version of the file (“Y” option). The installation will fail to complete, but once we restore the customizations to these files, you’ll be able to pick back up and complete the process.

    • /etc/nut/ups.conf
      The part you’re interested in is at the bottom, and it’s where you set up the driver for your particular UPS. Mine looks like this:

      [RPHS]
       driver = usbhid-ups
       port = auto
       desc = "CyberPower SX550G"
    • /etc/nut/upsmon.conf
      This sets up the UPS monitor that’s in charge of actually shutting down the Pi when the power goes out. There are a few sections you’ll need here.
      The first is the MONITOR section. Mine looks like this:

      MONITOR rphs@localhost 1 upsmon NOTMYREALPASSWORD master

      The second section is the NOTIFYCMD. You will only have touched this part if you set up email notifications for power events. Mine looks like this:

      NOTIFYCMD /etc/nut/upssched-cmd.sh

      Finally, there’s the NOTIFYFLAG section. This tells NUT which power events you’re interested in getting notifications for. Not just email notifications though, this includes “wall” messages. Mine looks like this:

      NOTIFYFLAG ONLINE SYSLOG+WALL+EXEC
      NOTIFYFLAG ONBATT SYSLOG+WALL+EXEC
      NOTIFYFLAG LOWBATT SYSLOG+WALL+EXEC
      # NOTIFYFLAG FSD SYSLOG+WALL
      # NOTIFYFLAG COMMOK SYSLOG+WALL
      # NOTIFYFLAG COMMBAD SYSLOG+WALL
      # NOTIFYFLAG SHUTDOWN SYSLOG+WALL
      # NOTIFYFLAG REPLBATT SYSLOG+WALL
      # NOTIFYFLAG NOCOMM SYSLOG+WALL
      # NOTIFYFLAG NOPARENT SYSLOG+WALL

      It’s not important that your configuration looks like mine, and it probably won’t. The important thing is that you’re writing down, saving off, or opening a second window with your customizations so that we can restore them later on.

  • Graphical prompt for the “/etc/apt/apt.conf.d/50unattended-upgrades” file.
    I haven’t touched this one either, so I took the new version.

Removing PulseAudio

The Jessie release of Raspbian used the PulseAudio library for Bluetooth audio. If you’re not using it, you can safely remove it.

sudo apt-get -y purge pulseaudio*

Restoring NUT

I decided to take the new version of the configuration files simply because I don’t know what else has changed in the newer versions, and my own customizations aren’t that extensive. Taking the new files will cause the upgrade to fail because part of the upgrade involves restarting the services, but the new configuration files are missing all of the vital information about your UPS. We’ll restore these files one at a time.

/etc/nut/ups.conf
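Open it in nano like the others:

sudo nano /etc/nut/ups.conf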

Scroll to the bottom and restore your UPS information from above.

sudo nano /etc/nut/upsmon.conf

Restore the MONITOR, NOTIFYCMD, and NOTIFYFLAG sections from above. Then, we’re ready to take another shot at completing the NUT upgrade. Pick up where we left off with the following command.

sudo dpkg --configure -a

Apt-get is just a polite shell around the dpkg command which is really doing all the work behind the scenes. This command tells dpkg to finish configuring any outstanding packages. You’ll get prompted again to keep or overwrite your files. We’ve already overwritten them once, and then reapplied our customizations, so this time, choose the default option of “N” to keep your configuration files the way they are now, and the installation should complete successfully this time.

Restoring Samba

As I mentioned above, my Samba shares stopped working after the upgrade, and the only thing that seems to have helped bring them back to life is this:

sudo apt-get dist-upgrade

Normally, I’d say don’t do this. I used to do it all the time until I got burned. dist-upgrades are the bleeding edge of upgrades. Not everything has been tested to make sure it gets along well. Most of the time you’re probably okay, but it’s that one time in ten that takes your machine down that you can avoid by only doing normal upgrades.

Checking one last time

Just to be sure nothing got left behind, I did another apt-get upgrade. I noticed a note about packages that were no longer needed. I decided to leave them alone for now. There is also an extensive list of packages that have been “kept back”. You can force these packages to update with a “sudo apt-get dist-upgrade”, but I advise against that. You can read more about it here, but the practical explanation is that dist-upgrade can leave your system pretty broken.

You’re welcome to try it if you’re feeling daring, but I’ve had bad luck with it in the past and generally avoid it. There’s definitely no way you should even consider this without a fresh backup. You’ve been warned.
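If you’re curious what those kept-back packages would actually pull in, apt-get can simulate a dist-upgrade without changing anything, which is a safe way to see the full list of proposed changes:

apt-get -s dist-upgrade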

Changes in Stretch

One change I’ve read about is to the way network interfaces are named. From what I’ve read, this only affects new installations, and upgrades retain their previous naming scheme, so an upgrade should be safe. Previously, you could count on the Pi’s Ethernet port being named “eth0”. Much like hard drives, though, if you happened to have more than one Ethernet port, there was the possibility that their names could end up in a different order on any given day. That’s a pretty rare case. Most Pis are only ever going to have the one port they came with.
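If you ever want to see what your interfaces are actually named after an upgrade or a fresh install, either of these will list them from a terminal:

ip addr

ls /sys/class/net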
