Eventually, we turned on blob snapshots, which, instead of replacing a blob with a new blob on every write, keep a copy that you can promote at a later time.
This week, we had a production issue where a blob had 0 bytes. We hadn't seen this in so long that we secretly hoped the problem had been magically fixed by someone else.
After promoting the previous copy, which unblocked the issue, I stared in frustration at the code, not understanding how we were writing to a stream with 0 bytes.
I probably spent an hour tracing through code and found no place where we were doing anything that would cause this issue. So I decided to take a walk... to the kitchen. There, I sat down with our CTO and described the situation. We started talking through scenarios of how this could happen. Maybe this was a bug in Azure blob or the SDK. Maybe it was our code. Maybe we were somehow purging the stream buffer.
After 10 minutes of ideas, we went back to my machine and started to take a closer look at the issue. First, we noticed the timestamps. We audit a lot of things in our system and we had an audit that occurred just before the time we wrote the 0 byte document. We knew what time the write occurred because of the timestamp on the document from the Azure Portal.
Working our way backwards, I filtered the logs looking for errors that may have occurred before the timestamp or just after it. Then I saw a null reference exception in our logs just a little bit before our successful audit. The top of the stacktrace showed the null reference was actually coming from our IOC container attempting to inject a dependency. That was bizarre. We hold on to the containers for the lifetime of the service and that could be days. Even still, that shouldn't have had anything to do with writing a 0 byte document.
However, in an attempt to chase down every lead, we dug a little deeper into the code surrounding where the exception was thrown.
A few layers above where the IOC call was eventually made, we saw that we were attempting to get an instance of a class that helps us manage encryption. It was that class that was throwing the null reference exception, and we happened to be resolving it just before writing our data to the blob document.
That shouldn't have mattered because we had code like this:
using (var stream = await blob.OpenWriteAsync())
We call OpenWriteAsync, which returns a CloudBlobStream, which inherits from Stream. We do some encryption stuff and then we write the data to the blob. The "do some encryption stuff" part is what was failing. Since this was wrapped in a using block, the exception was actually thrown in the compiler-generated try block, and then the finally block called Dispose on the CloudBlobStream because it ultimately implements IDisposable.
We dug a little deeper into what Dispose was doing on the CloudBlobStream: it ends up calling Commit, which, as you can guess, commits the data in the stream to the blob. But at this point we hadn't written any data, so it was actually committing an empty stream, which created a 0 byte blob document.
But why were we throwing that exception to begin with? Well, we DO dispose the container when the Cloud Service instance is shutting down. So that means we have to start shutting down a worker role (which happens via autoscaling or deployments) and begin processing a new message from our queue infrastructure within a very tight window. Then we attempt to create a new encryption helper instance at just the right time before the role is down, and that leads to the disposed container, which causes the exception.
That, in and of itself, shouldn't be a big deal. Our message goes back into the queue because it couldn't finish before the machine shut down. However, and without going into too much detail, we need to read data in the blob in order to know how we need to modify it. That means when we try to reprocess the message, it fails again because we don't have any of the data in the document that we should.
There are a couple of immediate takeaways from this that we are working through. First, we shouldn't have been doing anything inside the using block other than what was purely necessary. We didn't need to do the encryption work inside the using. If we hadn't, we wouldn't have had an exception in a place where a commit would ultimately be called.
Second, we are considering putting the writes to the document in a separate message that isn't dependent on reading the document first. This would have let us replay the message and work the second time around.
To get around the issue right now (before we break things apart), we removed the using block altogether and simply call Dispose ourselves when we are done. We've also removed everything between getting the stream and using the stream.
Simple scenario: You talk to another service that uses Azure AD. For development, you want to use a stub service that returns things and does things similar to the real one in Azure. Doesn't matter what it is.
Some people may use IoC to handle this. Sometimes, that requires config switches. However, what if you want to be 100% sure that code can never be accidentally turned on in production? You may consider using a compiler directive.
#if DEBUG
    public void DoItTheUnsafeWay()
    {
        // Talk to the local stub service; this must never run in production.
    }
#else
    public void DoItTheSafeWay()
    {
        // Talk to the real service backed by Azure AD.
    }
#endif
Depending on the build mode, one of those methods will not make it into the assembly. This prevents any accidents that could do the unsafe thing. Yes, there are other ways of doing this, and typically those ways require process and convention. No, this is not fail-safe, as technically someone could accidentally change the build mode to DEBUG for a production release (sure, whatever).
class Program
{
    static void Main()
    {
#if DEBUG
        DoItTheUnsafeWay();
#else
        DoItTheSafeWay();
#endif
    }
}
There are problems, though, that can arise from using these directives. First, if you have your environment set to DEBUG and this is the only spot you use the DoItTheSafeWay method, looking for any usages in your IDE will result in ZERO INSTANCES! NONE! You'll spend 45 minutes trying to figure out how this thing is done in production because, like any normal person, you're using Find References in your tool.
But NO! You won't find it. Your IDE simply laughs at you while you struggle, knowing it must be used somehow. The IDE knows what's going on. It knows what you want, but it decides to continue to hide this from you. So you end up doing a damn regex search among all the files. The IDE knows it has been caught red-handed trying to sabotage you and surfaces the files for you while sheepishly blaming Resharper for performance problems. I call it BS2019 for a reason (not always; generally I like VS).
The other problem is that your trusty IDE will tell you that certain using statements are not being used and that you should delete them, or it will remind you with grayed-out text or a colored dash on the scroll bar. You delete them, commit, push, and then find out the build failed because, in release mode, they are being used...
So, use the following code (or something similar):
public class RunMode
{
#if DEBUG
    public static readonly bool IsDebug = true;
#else
    public static readonly bool IsDebug = false;
#endif
}
And then use it like this:
class Program
{
    static void Main()
    {
        if (RunMode.IsDebug)
        {
            DoItTheUnsafeWay();
        }
        else
        {
            DoItTheSafeWay();
        }
    }
}
Your experiences will vary, but reports of using this code show it has saved marriages, increased gas mileage and prevented the death of at least 2 dozen water fowl.
Using an Azure function, this can be done two ways.
1 - Check for the X-ARR-ClientCert request header and, if present, base64 decode the value and load it into an X509Certificate2. From there, you can check the thumbprint to validate that the client is correctly sending the certificate with the request.
2 - Get the request context and check to see if the ClientCertificate is null. If it's not, check the thumbprint.
I chose the second way for one single reason - I did not know about the first way. So, if you choose the second way you'll need to make a setting change to allow the certificate to be passed in with the request (instead of as part of the request header).
Go to the SSL settings of the function app and enable the Incoming client certificates flag.
Here's some code:
[FunctionName("Function1")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequestMessage req,
    TraceWriter log)
{
    var clientCert = req.GetRequestContext().ClientCertificate;

    if (clientCert == null)
    {
        return req.CreateResponse(HttpStatusCode.BadRequest, "There's no client certificate");
    }

    log.Info($"Client Thumbprint: {clientCert.Thumbprint}");

    return req.CreateResponse(HttpStatusCode.OK, $"Thumbprint: {clientCert.Thumbprint}", new JsonMediaTypeFormatter());
}
Boom. Done. All in all, this took about 8 minutes to do (including creating the function app) and it saved me from mucking around with my machine, generating a cert, configuring the web server etc., and now others on my team can use it.
Using the second way gives an added benefit of forcing all requests to include a client cert. So, if your app immediately gets rejected, you know the cert isn't even being loaded.
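For completeness, the first way would look something like this. This is an untested sketch of the header-based approach; the expected thumbprint value is a placeholder you'd replace with your own:

```csharp
// Option 1: pull the client certificate out of the X-ARR-ClientCert header.
if (!req.Headers.TryGetValues("X-ARR-ClientCert", out var headerValues))
{
    return req.CreateResponse(HttpStatusCode.BadRequest, "There's no client certificate");
}

// The header value is the base64 encoded certificate.
var certBytes = Convert.FromBase64String(headerValues.First());
var clientCert = new X509Certificate2(certBytes);

// Validate the thumbprint against the one you expect.
if (!string.Equals(clientCert.Thumbprint, "YOUR-EXPECTED-THUMBPRINT", StringComparison.OrdinalIgnoreCase))
{
    return req.CreateResponse(HttpStatusCode.Unauthorized, "Thumbprint mismatch");
}
```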
But that's too much of an obvious statement to mean anything.
I don't know everything and I'm ok with that.
Over the past decade of software development, I've created opinions of my own rather than regurgitating the opinions of my mentors, blogs I've read or books I've skimmed. I don't mean to use "regurgitating opinions" in any negative connotation. When I first started in this career, I didn't know anything and I looked to my mentors for guidance and advice. I looked to them for my opinions. When they were disgusted by SOAP interfaces, so was I. When they said guids should never be used as primary keys, I believed them and never tried to use them. But eventually, someone would challenge my "beliefs" and I would either: 1) get defensive and take the challenge personally, or 2) honestly consider their point and adjust my thinking.
The first part of my career was centered a lot around number 1, but I've worked really hard the past several years to move my ego out of the way in order to grow.
It took me a while to be honest with myself and realize that I took people's challenges very personally at times. Digging deep, I realized I felt it was almost an attack on my competency, as if they were denouncing my experience or skills. Couple this with the fact that I'm a two-time college dropout, and my ego suddenly became very brittle.
There was one time, in particular, when I was first leading a team. A junior dev would ask me random trivia questions about .NET or C# to see if I knew them. Some of them I didn't know and he would proudly tell me the answer. To me, that was challenging some pseudo authority I had granted myself. I felt as if he was asserting his dominance over my informal education or trying to show that I wasn't qualified to be where I was. This, in turn, made me dismissive of his input or propelled me to be extremely critical of his ideas and approaches.
Damn, even as I type this out, I'm still ashamed of how I felt back then. It was pathetic and my heart hurts when I think about my son and daughter learning how fragile daddy's ego can be.
It took me a long time, and a lot of introspection, to realize that he merely wanted to impress me. Yeah, I felt like a big douche canoe afterwards. Instead of challenging me, he saw me as an authority and wanted to prove himself against a higher bar he had set. A bar he set with me in mind.
I've got many stories of how my ego precluded me from adding value or cultivating deeper relationships with those around me. Thankfully, I know myself better now and I'm better equipped to handle it. Every now and then, however, I can feel myself slip into that deep abyss of self-doubt and sulk at the sunken pillar constructed of my fears and failures. This can cause me to lash out at those around me and manifest in ways that people can't really see.
What's interesting, though, is that since I'm a very sensitive person, I'm often acutely aware of how I come across to others. Once I realize I allowed a vein of fragility to infect my confidence and composure, I'll go back to the person afterwards and apologize. However, they are typically completely oblivious to the internal Goliath I was facing at the time and had no reason to believe the waters beneath didn't match the calm, glass-like surface.
I've gotten a lot better at this over the past 7 years; it's been a real focus of mine. When I feel myself hardening within a shell of pretentiousness and entitlement, it's a sign to me that I need to humble myself and remember that I'm not playing some zero-sum game where if someone else wins then I lose. I haven't opened up to many people about this struggle. I'm writing this post as a cathartic means of freeing myself from those chains in hopes others can tell me their experiences and tell me if they struggle with their own ego at times as I do.
However, even if no one agrees and all I get is an inbox full of "don't be a pretentious douche canoe", I'll still be content. This career I've chosen is starting to become far less about software and far more about value. Don't get me wrong, I feel very competent as a software developer. But I'll admit when I don't know something and I'll be the first person to ask you what the acronym means that you said in a side remark discussing your project's problem.
The basic idea is that I, as I'm sure others do, learn best after making mistakes. I can be told the right way to do something and I can follow the approach, but it never really sinks in until I see what happens when I don't do it. The reason I care about things like dependency management or domain models is because I've felt the pain of not using them. It's the pain that drives me to do better. It's the pain that gives me an opportunity to improve. Were it not for the pain, I'd never want to change. For me, I have to make mistakes before I can grow.
MDD doesn't stop with my technical skill sets. When I married my wife, we were both pretty young (21). I hadn't had a chance to really get myself together and now I had to be a husband. I made several mistakes (we both did, but I'd never tell her that) and I felt pain. Sometimes I felt that pain several times because I can be a slow learner. But eventually, I find that I want to stop feeling the pain enough to change my ways, and that's always when I grow. We're still young and have a lot of years to continue to grow, but I can at least look back as we clear our first decade and see improvements.
I've been guilty of trying to anticipate what my client wants to hear and craft a pleasing response that orbits the truth instead of landing on it. This almost always manifests itself in over promising. This is an area of particular interest for me. I've felt the pain of over promising (often in the way of lowballing estimates) time and time again. I've walked that emotional path so many times that my feet would take me there without any conscious effort on my part. In other words, I was so good at doing it that I sometimes didn't realize I was doing it until it was too late.
One mistake started out the same way. A client was pressuring me for an estimate. I gave one and she didn't like it. So we "negotiated" until I walked out of the room with a familiar sense of foreboding. As a human, I suck at estimates in general. Put me under the pressure of some forced negotiation (whether real or by my own imagination) and by the end of it I've probably blacked out halfway through and temporarily made my client happy at the expense of several future sleepless nights.
Before I go any further, in no way am I attempting to blame a client for my lack of directness. A client's job is to maximize value for her company which includes motivating those who work for/with her to deliver quality content quickly. As an executive for the company, she is expected to drive success hard.
I developed a theory for why I've done this. I think I subconsciously simulate the client interactions with a legitimate, variable estimation. For example, maybe the simulation starts with an estimation I feel comfortable with. If I think she'll blow up, I'll re-estimate the work with the goal of reducing the length of time required. The problem is that this distorts my vision to the point that I begin to overestimate my capacity or ability. I also see this as a self-fulfilling prophecy of sorts. If I'm too afraid of what the client will do when I say 6 weeks, I'm going to find a way to justify to myself saying something less.
This will lead to a new estimate of 5 weeks. She still won't be happy but I'm sure if we work really hard we can possibly get it done in 4 weeks. So I can just tell her 4-5 weeks. Cut to me delivering the estimate and the only thing the client hears is "4 weeks". Now she doesn't know that I've already tried to cut through the "what if things go perfect" scenario and thinks she can make me work harder to get it done faster by imposing a deadline. Before I know it, I'm walking out of the meeting and the client has used her knife to carve a giant X on a delivery date three weeks from now... Mother Francis.
So what happens next? I get myself and the team pumped up all the while hoping my face doesn't betray my confident facade. The familiar anxious feeling sits in the bottom of my stomach as if I had extra servings of stone soup. For the next two weeks, I ignore the signs while confidently thinking that we are going to deliver a miracle (with a few swishes of pepto-bismol added for good measure). The last week rolls around and it looks like we are really going to do it. But then something happens, like it does every time. Something happens in the story that we didn't foresee, production blows up and pulls half of the team away, or the client introduces a "simple" change.
Two or three days before we reach the giant X on the calendar, I realize it's hopeless. I begin to think that if I work the next 72 hours, ignoring distractions like sleep, food, or time with my beautiful wife and kids, we might have a 30% chance of making it.
Reality quickly sets in and I begin customizing my most dreaded email template. You know the one: the "we need to delay our release" email.
I eventually got tired of this. I was ready to start making changes that would help prevent or control situations like this in the future. I had felt the pain enough now that I was ready to do something about it. This was a great opportunity to let my mistake drive my own development.
Note: my point isn't to make perfect estimates; I don't even think that's possible. My point is to work to create more realistic estimates while being honest with myself and my client.
So here are some things I started doing:
When I was a kiddo, if I ever lied to my mother, I would be punished twice as hard (doing the bad thing I lied about + lying). If I have tough news for a client, I embrace the suck and set the expectations from the beginning. Over time, I learned techniques of delivering bad news in ways that weren't so terrible including presenting options for remediation and giving the client a chance to make a business choice regarding the matter.
This seems like a very obvious thing, but the problem is that I would forget about my previous mistakes and over estimate my own ability. Even if the scope is extremely well defined and I know the code base like I know my refrigerator, taking a moment of pause will allow me to clear my mind and provide time for historical reflection. Afterwards, I'll approach the client with an estimation completed free from pressure.
If my physics classes taught me anything, it's that there is a perfect world that exists where there is no air resistance and all cows are spherical. Also, in this perfect world, nothing unplanned ever happens. My team suffers no illness, family emergencies, or destroyed laptops. We make no incorrect assumptions and introduce no bugs and every solution comes to us immediately without needing to ponder it for days. Maybe this world does exist... however, it's just not the world we live in.
The only constant is change. Life is unpredictable and I need to remember that when I'm thinking about how much time we'll need to complete something. While my client might be sympathetic to one of my teammates needing to take a week and a half off because of a death in the family, she doesn't want to hear that as an excuse for why we are late. My estimates need to take the unknown into account. Some people call this padding; I call it being realistic.
Sometimes, I'll think that I know the problem and solution set so well that I don't need to dig deeper. Making estimates within a really short period of time is like trying to quickly eat a loaf of bread with a rock hidden inside. When you find that damn rock, it's going to hurt like hell.
I need to stop thinking I'm always the right person to make the call on how long something will take. I might be the lead on the team, but I'm not the best. It's not my job to know everything. It's my job to utilize my teammates' strengths to create a cohesive tandem of individuals. We should be estimating as a team, not just me. Again, this seems very obvious, but I'm admitting to making this mistake.
Maybe I really can do something in 3 days. But does that mean everyone on my team will take the same amount of time? I can crank out some front end code pretty dang quickly. But you need me to write a complex SQL query? Hello Google (ok, really it's StackOverflow). Another person on my team might happen to be the person who has to do some CSS work and she might not be very good at it. 79.1% of all developers are intimidated by CSS... yes, I just made that up.
We suck at estimates, but we suck gloriously worse the larger the workload is. Breaking the work down into smaller items, taking the aggregate then applying overall ranges has been much more effective for me.
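As a sketch of what I mean (the tasks and the range multipliers here are made-up numbers, not a formula I'm prescribing): estimate the small pieces, sum them, then put an overall range around the aggregate rather than guessing one big number up front.

```csharp
using System;
using System.Linq;

// Per-task estimates in days, made after breaking the work down.
var taskEstimates = new[] { 1.5, 0.5, 2.0, 1.0, 3.0 };
var total = taskEstimates.Sum();

// Apply an overall range to the aggregate instead of padding each task.
var low = total;          // everything goes perfectly (it won't)
var high = total * 1.4;   // realistic buffer for the unknown

Console.WriteLine($"Estimate: {low:0.#}-{high:0.#} days");
```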
This isn't an exhaustive list of how I do better; it's just a start.
I love Scrum and Kanban (each in different scenarios) but when I'm working on an estimate for a client who wants to know how much something is going to cost before they sign the statement of work, sometimes you've just gotta estimate.
Maybe some of this resonates with you... maybe not. In the end, this is just me pulling back the curtains and showing how I took some mistakes I made and turned them into growth opportunities.
Open Visual Studio. Click on the Resharper menu item, navigate to Tools > Template Explorer, then click on the new template button (see image).
In the editor, paste this: //TODO : $user$ $date$ $description$
On the right, you'll now see three parameters. For user, click "Choose macro", and then select "Full user name of current user".
For the date parameter, click "Choose macro", and then select "Current date in specified format". In the format box, type "MM/dd/yyyy".
Uncheck the editable checkboxes for user and date.
Lastly, in the "Shortcut" box, type "todo" and name it "todo helper".
Ok, save and make a new one for "hack" comments with "//HACK : $user$ $date$ $description$"
Now, you should be able to go into your C# class files and do this:
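Typing the shortcut and filling in the description produces a comment along these lines (the name, date, and description shown are made up):

```csharp
//TODO : John Doe 03/14/2019 Refactor this once the payment API settles down
```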
Cool, huh?
Ok, now let's make a unit test help with this:
[Test]
public void $methodName$()
{
    //Arrange
    $END$

    //Act

    //Assert
}
I use the shortcut nut for "nUnit Test". If you use other testing frameworks, just modify it to suit your needs.
Save it and now add tests like this:
Pretty sweet, right? Life changing? Maybe.
Let's say you work for a company that sells dog shoes online. Thinking about it, that's a dramatically underserved market.
Currently, your company's website allows for users to pay with their credit card and then, hopefully within a few days, receive their shoes. So let's take a look at some sample code for the handler that processes the message for payment.
public class ProcessPaymentHandler : IHandleMessage<ProcessPayment>
{
    public void Handle(ProcessPayment message)
    {
        paymentProviderClient.Charge(message.paymentData);
        //save and log response
    }
}
Ok, obviously a simple example but you get the idea.
Everything is working well and your clients' canine pals are having their paws covered in stylish footwear.
Now your product owner comes to you with a new requirement: send a confirmation email once the payment has been processed. This is easy enough. Let's just have the ProcessPaymentHandler send a command to send the confirmation email.
public class ProcessPaymentHandler : IHandleMessage<ProcessPayment>
{
    public void Handle(ProcessPayment message)
    {
        paymentProviderClient.Charge(message.paymentData);
        //save and log response

        var emailAddress = customerRepo.GetEmailByOrderId(message.OrderId);
        bus.Send(new SendConfirmationEmail(emailAddress));
    }
}
This is a common, albeit naive, approach to the problem. It will work, assuming we have a handler to receive the SendConfirmationEmail message, but there are some problems with it.
The first problem is now the code that handles processing a payment has a dependency on the process that sends emails. This single line of code may not seem like a dependency problem, but maintaining code and clean architecture is a lot about managing dependencies. Introducing the command here forces the host of this handler to know about the location of the email handler.
There's also a deployment dependency. We now have to keep this handler in sync with the current version of the email handler.
If the message interface changes because the handler was expanded or for any other of a multitude of reasons, we now have to come back and change code that handles processing a payment because some other code related to an email has changed (admittedly, though, we could effectively manage different versions in messages). Which leads us to the next problem...
It's a violation of the Single Responsibility Principle (SRP) which basically means that a piece of code should have only one reason to change. The class is currently supporting two requirements (processing a payment and sending an email) therefore has two reasons to change.
In order for us to add a new feature, we had to modify existing code. Sometimes that's inevitable, but sometimes it's a sign of a series of preceding bad design choices. When that happens, it is a violation of the open/closed principle. This principle states that you should be able to extend functionality without having to modify the internals of existing code.
What we want is the ability to complete the feature for sending the email without having to modify the existing code.
If the ProcessPayment Handler published an event once it was done, then the Email Handler could subscribe to the event and take the appropriate action. This allows the payment processor to continue on its merry way being none the wiser that any process cares about it.
Here's the code for that:
public class ProcessPaymentHandler : IHandleMessage<ProcessPayment>
{
    public void Handle(ProcessPayment message)
    {
        paymentProviderClient.Charge(message.paymentData);
        //save and log response

        bus.Publish(new PaymentProcessed(message.OrderId));
    }
}
In this code, we removed the line getting the email address and the code to send a new SendConfirmationEmail command.
It's pretty clear why the first line was removed. Since we aren't sending the command, we don't need to find the email address.
The second line, however, has some subtleties that could be missed.
The command was "sent" while the event is "published". Commands can be sent from N number of hosts but they are "sent" to a location because that location is always known. If a service has the contract and the correct queue, it can send any command it wants to. This means, however, that the service is now coupled to the processor of that command; being aware of its very existence is a coupling.
However, events are published from one and only one logical host but can be received by N number of hosts. Other services can subscribe to those events without the publishing service being aware of it. This inverts the coupling in the other direction. The service that needs to do the action is now coupled to the service that publishes the event. The coupling here makes sense. In our case, the email service wants to know when it needs to send the confirmation email. So, we can allow it to couple to the PaymentProcessor service.
If you are still not quite grokking events vs commands, try this:
Commands are like email. You know who is going to read it and you know where it is going. You send the email to one person with the expectation that they will read it and act on it.
Events are like this blog post. I have no idea if anyone will read it, who that person is or where they are located. I put it out in case anyone is interested in my data.
I want to reiterate something really quickly: Anyone can send a command, but there must be only ONE service that handles it. Anyone can subscribe to an event but there must be only ONE service that publishes it.
The event is named as a past-tense version of the command it was published from. This is a convention I pretty much always use when naming commands and events. The commands are imperative. They represent actions your services can do and are generally found in your ubiquitous language. The events are past tense. If your command name is "DeleteAccount", the event would be "AccountDeleted".
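As a hypothetical illustration of that convention (these exact message classes aren't from the dog shoes example; ICommand and IEvent are NServiceBus's marker interfaces):

```csharp
// Imperative command: an action a single service knows how to perform.
public class DeleteAccount : ICommand
{
    public Guid AccountId { get; set; }
}

// Past-tense event: a fact published after the command has been handled.
public class AccountDeleted : IEvent
{
    public Guid AccountId { get; set; }
}
```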
Here's some sample code for handling the event:
public class PaymentProcessedHandler : IHandleMessage<PaymentProcessed>
{
    public void Handle(PaymentProcessed message)
    {
        bus.Send(new SendConfirmationEmail());
    }
}
You may have noticed I'm sending a command from this event handler instead of just doing the work. There is a reason and I'll get to why I did that in another post.
Up to now, all we've really done is changed a command to an event and moved some logic to the event handler, which then delegates to another command handler. So where's the power in that?
Cue the Product Owner
Now we have some new requirements. Once a payment has been successfully processed, if this is a first time customer then the company wants to send out a special dog treat to the customer to give to their canine companion as a thank you for their business. So let's add that capability.
If we didn't have events, we would need to modify the existing code for processing a payment and have another command sent (which introduces more of the three problems from earlier). However, since we have events, all we need to do is let the catalogue service subscribe to the PaymentProcessed event and do its thing. This means we don't have to modify ANY code in the Payment Processor.
public class PaymentProcessedHandler : IHandleMessage<PaymentProcessed>
{
    public void Handle(PaymentProcessed message)
    {
        bus.Send(new SendPhysicalCatalogue());
    }
}
We just extended the application without modifying any existing code. That's the power of using events. If the company decides they also want to add the customer to a list for someone to call and thank them personally, we could subscribe to the event again. If the company decided they no longer wanted to send dog treats, then we simply unsubscribe to the event.
All of this is done without redeploying the current, existing code (PaymentProcessor).
When you add a subscription to a host, NServiceBus actually sends a message from the subscribing host to the publishing host. This informs the publishing host that the subscribing host wants a copy of the event when it is published. This gets stored in whatever persistence you previously chose (Azure Storage, SQL, MSMQ, etc.). This is true for all persistences except when you are using Azure Service Bus or RabbitMQ, because they both have native pub/sub capabilities and hold onto the subscription data themselves.
In order to allow for extensibility and prepare for future features, every command should have a corresponding event to go with it. With NServiceBus, if no one has subscribed to the event, then nothing will happen so there's no overhead of adding the events to the handler.