Attach to Process

Thoughts and Notes on Software Development

If you search for “how to delay message processing in RabbitMQ”, you'll most likely run into two possible solutions.

  • One solution is to make use of the message TTL argument with additional queues to route messages through. If I understood this approach correctly, you basically route your message to Queue A, which has a message TTL set and no consumers, where it sits until it expires and gets dead-lettered to another queue, say Queue B. Your consumer then looks for messages at Queue B.
  • The second solution is to use the official RabbitMQ Delayed Message Plugin.
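
For reference, the first approach can be sketched roughly like this (the queue and exchange names are made up for illustration, and the TTL value is arbitrary):

```csharp
// Declare a "holding" queue with a message TTL and a dead-letter exchange.
// Messages published here sit with no consumer until the TTL expires, at
// which point RabbitMQ dead-letters them to "work_exchange", which routes
// them to the queue your consumer actually reads from.
var delayQueueArgs = new Dictionary<string, object>
{
    { "x-message-ttl", 300000 },                  // 5 minutes, in milliseconds
    { "x-dead-letter-exchange", "work_exchange" } // where expired messages go
};

channel.QueueDeclare(queue: "delay_queue",
    durable: true,
    exclusive: false,
    autoDelete: false,
    arguments: delayQueueArgs);
```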

Both of the solutions above are valid, but I ended up implementing neither, and instead went with a solution that is configurable via the consumer application. First, my reasons for not going with the established options listed above.

  • I did not want to add any more queues or exchanges, especially if their purpose is to just move messages around.
  • The RabbitMQ Delayed Message Plugin as of this writing is still listed as an “experimental yet fairly stable” plugin. The “experimental” disclaimer is a matter of concern to me, and I would prefer to wait until it matures enough that it is no longer described as such.
  • Lastly, I really wanted a solution that is configurable via the consumer application.

So, the solution I went with was to add a PublishDate via the message headers and then the consumer can delay message processing based on this date value.

Adding a PublishDate header value is easy: you add it to the Properties.Headers dictionary before publishing the message.

var properties = channel.CreateBasicProperties();
properties.Persistent = true;

properties.Headers = new Dictionary<string, object>();
properties.Headers.Add("PublishDate", DateTime.Now.ToString());

channel.BasicPublish(exchange: "",
    routingKey: "task_queue",
    basicProperties: properties,
    body: body);

Note that I'm adding the PublishDate value as a string, instead of a DateTime value. Adding it to the dictionary as a DateTime value causes an error; if I remember correctly, something about an invalid table value. (As far as I can tell, the client serializes headers as an AMQP field table, which only supports a limited set of value types, and a raw DateTime is not one of them.) So I just went with a string value.

On the consumer side, you will need to add code to retrieve the PublishDate from the headers.

consumer.Received += (model, ea) =>
{
    byte[] publishDateHeader = (byte[])ea.BasicProperties.Headers["PublishDate"];
    DateTime publishDate = Convert.ToDateTime(Encoding.UTF8.GetString(publishDateHeader));
    // Now you can delay message processing based on the publish date value

    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    Console.WriteLine(" [x] Received {0}", message);

    channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
};

Note that I'm first casting the header value to a byte array, before converting it to a string, then finally to a DateTime value. Adding a string as a custom header turns it into a byte array on the way through, since AMQP transmits header strings as binary. Thankfully somebody else ran into this issue before and shared a solution for it.

With a PublishDate value available, you can now delay message processing however you would like. In my case, I opted to compare the PublishDate value to DateTime.Now, which let me check how old the message was. For example, if a message is 5 minutes old, it has been delayed enough and gets processed right away. If the message is only a minute old, the consumer thread waits until the message is 5 minutes old before processing it.
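
The age check itself is only a few lines. Here is a rough sketch of what my consumer does; the 5-minute threshold is hard-coded here for illustration, while in my setup it comes from the consumer's config file:

```csharp
// Assumes publishDate was read from the message headers as shown above,
// and that System.Threading is in scope for Thread.Sleep.
TimeSpan requiredAge = TimeSpan.FromMinutes(5);
TimeSpan messageAge = DateTime.Now - publishDate;

if (messageAge < requiredAge)
{
    // The message is too fresh; wait out the remaining time
    // before processing it.
    Thread.Sleep(requiredAge - messageAge);
}

// ...process the message as usual...
```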

There are some drawbacks to this approach. Namely, you will have to go through the Publisher/Consumer classes to add the code for handling a PublishDate header value. Depending on how your queues are structured and how many publisher-consumer class files you have, you could end up with changes to multiple files just to add this feature. On the flip side, if only one queue needs this “delayed message processing” feature, then you'll have minimal changes while your other queues continue as is. There are probably more pros and cons to this approach that I haven't thought of. Still, I prefer the flexibility of this approach, as I only have to worry about editing a consumer's config file, and it allows me to run multiple consumers, each with their own specific message processing setting.

Have you had to design a solution to delay message processing in RabbitMQ? If so, I am curious to hear what approach you went with and why. Please do share in the comments below or send me an email and we can discuss.

Tags: #CSharp #DotNet #RabbitMQ

Discuss... or leave a comment below.

Iris Classon wrote a good lengthy post about the history of .NET web development and how it all led to the .NET Core that we have today. As someone who doesn't get to work as much on the web dev side of things, I found this a very informative read. I think it is a good read for any .NET developer, so check out her post by following the link below.

ASP.NET Core and .NET Core and the Web Development Stack Timeline

Tags: #Bookmarks #AspDotNet #DotNet #DotNetCore


Harder than it needs to be. Or maybe I was just so used to how easy it was to use with .NET Standard apps/projects. Either way I got it to work using this code.

[Image: Log4-Net-In-Net-Core.png]

I was going to copy the code over to here but it turns out I do not have the time to figure out how to write code on write.as. So you get a picture instead.
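
For those who can't zoom into the picture, the usual wiring looks roughly like the sketch below. This is the commonly shared pattern, not necessarily the exact code in the image, and it assumes a log4net.config file in the project output directory and the usual Program class:

```csharp
using System.IO;
using System.Reflection;
using log4net;
using log4net.Config;

// In .NET Core, log4net no longer picks up its config automatically;
// you have to point it at the config file yourself.
var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());
XmlConfigurator.Configure(logRepository, new FileInfo("log4net.config"));

ILog log = LogManager.GetLogger(typeof(Program));
log.Info("log4net is up and running in .NET Core.");
```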

#Log4Net #DotNetCore


This is a problem I've seen in the past, and again just recently: exceptions made harder to troubleshoot because of the way objects are instantiated. Allow me to explain.

class Program
{
    static void Main(string[] args)
    {
        try
        {
            UserDto dto = new UserDto()
            {
                UserName = "Apatosaurus",
                LastLoginDate = null
            };

            User user = new User()
            {
                UserName = dto.UserName,
                LastLoginDate = dto.LastLoginDate.Value.ToString("MM-dd-yyyy")
            };

            Console.WriteLine("User Info:");
            Console.WriteLine("Username: " + user.UserName);
            Console.WriteLine("LastLoginDate: " + user.LastLoginDate);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            Console.WriteLine(ex.StackTrace);
        }
    }
}

class User
{
    public string UserName { get; set; }
    public string LastLoginDate { get; set; }
}

class UserDto
{
    public string UserName { get; set; }
    public DateTime? LastLoginDate { get; set; }
}

Take for example the code above. What happens if the dto.LastLoginDate property is actually null when instantiating a User object?

at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
at System.Nullable`1.get_Value()
at NullRefErrorExample.Program.Main(String[] args) in C:\Users\dbansigan\source\repos\NullRefErrorExample\NullRefErrorExample\Program.cs:line 21

The error message above is what was displayed in the console app I was running. Line 21 points to User user = new User(), which is the line of code that instantiates the object, but not the line of code that caused the exception. It should have pointed to Line 24 instead, LastLoginDate = dto.LastLoginDate.Value.ToString("MM-dd-yyyy"). However, since I followed Visual Studio's suggestion to simplify object initialization (IDE0017: Object initialization can be simplified), this is what happened. So now you can see how it can make troubleshooting a more time-consuming task.

Note that this is a very simple example. Imagine if you had a big class with, say, 20 properties and multiple lines of initialization code that could throw. It would be pretty hard to figure out which line of code caused the exception without debugging the application.

So how do we fix it? Should we even still try to simplify object initialization?

Yes, we can still simplify object initializations; we just have to be wary of code that can throw exceptions when assigning property values. So the rule I follow is this: any line of code that could throw a null reference exception, or any exception for that matter, is taken out of the simplified object initializer and moved to its own line.

So, taking the example from above, this is how I would initialize the object with that rule in mind. I basically just take the code that assigns the value to the user.LastLoginDate property and move it outside the braces containing the simplified initialization code.

User user = new User()
{
    UserName = dto.UserName
};
user.LastLoginDate = dto.LastLoginDate.Value.ToString("MM-dd-yyyy");

I run the app once again and this is what the stack trace tells me.

at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
at System.Nullable`1.get_Value()
at NullRefErrorExample.Program.Main(String[] args) in C:\Users\dbansigan\source\repos\NullRefErrorExample\NullRefErrorExample\Program.cs:line 25

Notice how the stack trace now points to Line 25, user.LastLoginDate = dto.LastLoginDate.Value.ToString("MM-dd-yyyy");, which is exactly the line of code that caused the exception. So now I get a head start on troubleshooting it. Also, any experienced developer will be able to tell almost immediately what caused the exception if all they have to do is look at one line of code.

Are there alternative solutions to this?

An alternative is to check for null before even instantiating the object, as in the code below. I think this is a valid solution, and it allows you to throw an exception with a detailed/specific message, or a custom exception. That said, if you are throwing an exception only because you cannot instantiate an object, then I don't see how it is significantly better than just letting it fail on the line of code assigning the property value. Your application is not going anywhere either way, since you still cannot instantiate the object it needs to move forward.

if (!dto.LastLoginDate.HasValue)
{
    // (ArgumentException or InvalidOperationException is arguably a better
    // fit here; explicitly throwing NullReferenceException is discouraged,
    // since that type is normally reserved for the runtime itself.)
    throw new NullReferenceException("dto.LastLoginDate cannot be null");
}

Another possible alternate solution could come from the use of nullable reference types in C# 8. However, this is something I have not been able to play with yet, so I cannot comment on the use of it.

I've seen this problem in other kinds of code as well, for instance where method chaining happens. If there is an exception, the end result is the same: you can hardly tell which method parameter, which method call, or which specific part of the code caused the exception, because it is all essentially one line of code. In situations like this, it is often advisable to break up the chain of method calls, unless you are absolutely sure it can never throw.
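
To illustrate with the dto from earlier, here is a contrived chain and its broken-up equivalent:

```csharp
// Chained: if any call in this line throws, the stack trace points
// at this single line, no matter which call actually failed.
string shortDate = dto.LastLoginDate.Value.ToString("MM-dd-yyyy").Trim().Substring(0, 5);

// Broken up: the failing step is obvious from the stack trace.
DateTime lastLogin = dto.LastLoginDate.Value;   // throws here if null
string formatted = lastLogin.ToString("MM-dd-yyyy");
string trimmed = formatted.Trim();
string monthAndDay = trimmed.Substring(0, 5);   // the "MM-dd" portion
```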

I'm curious whether other developers also instantiate objects in this manner to avoid hard-to-troubleshoot exceptions when creating objects. Whether you do or you don't, I'm curious to hear your reasons. Please do send me a message so we can discuss.

Tags: #CSharp #DotNet


Uhm the new Windows Terminal is looking rather... fabulous! Honestly, this looks pretty amazing! It also looks like it will be the only command-line terminal I will need in Windows, as it can also work with Powershell. It even has multiple tab support and emojis, wow!

More info: Introducing Windows Terminal

Tags: #Bookmarks #WindowsTerminal


As software developers, one of the problems we have to be mindful of is our tendency to “reinvent the wheel”; that is, the tendency to write a new application to solve a problem that could have been solved by simply using existing applications/tools. When faced with the problem of importing all ObjectIds from a MongoDB collection into a SQL Server table, I thought I needed to write a new migration/utility app. Turns out I didn't need to write any code at all, well, except for the MongoDB query of course, but that's beside the point. Here is how I solved this problem using free tools available on the Internet.

Solution

First up is the MongoDB query that will get you all ObjectIds from a collection. I use the free Robo 3T app to query MongoDB.

db.getCollection('Collection').find({},{_id:1});

Robo 3T has a “view results in text mode” option that will display the results in JSON format. Select that and then run the query listed above. You will then be presented with results that will look similar to what I have below.

/* 1 */
{
    "_id" : ObjectId("5cba07dcca67bd08e8a6b3e2")
}

/* 2 */
{
    "_id" : ObjectId("5cba07b4ca67bc33705ded1d")
}

/* 3 */
{
    "_id" : ObjectId("5cba07baca67bc33705ded1e")
}

So now we have a list of ObjectIds from MongoDB. Imagine if you got back thousands of ObjectIds. It would be tempting at this point to say something like, hey I need to write a console app that can parse these results. I know I did. And I did write such an app, but eventually I realized I didn't need to. All I needed was Notepad++. Actually any text editor with a good enough “search and replace” feature will work.

After copying those results from Robo 3T into Notepad++, you can then use the “search and replace” functionality to transform the results into INSERT scripts that you can run in SQL Server.

So the first thing you need to do is comment out all those curly braces { and }. To do this, you simply search for { or } and replace them with --{ or --}. Adding -- to the start of a line comments it out in SQL Server. You can also opt to simply delete all those braces; SQL Server won't care if there are blank lines between INSERT statements. At this point your text file in Notepad++ will look similar to what I have below.

/* 1 */
--{
   "_id" : ObjectId("5cba07dcca67bd08e8a6b3e2")
--}

/* 2 */
--{
   "_id" : ObjectId("5cba07b4ca67bc33705ded1d")
--}

/* 3 */
--{
   "_id" : ObjectId("5cba07baca67bc33705ded1e")
--}

The second thing you need to do is transform the lines containing the ObjectId values into SQL Server INSERT statements. This takes two steps:

  1. Search for "_id" : ObjectId(" and replace the values with INSERT INTO [dbo].[MongoDbObjectIds] ([ObjectId]) VALUES ('. (This is assuming you have a MongoDbObjectIds table in SQL Server with an ObjectId column.)
  2. Search for ") and replace the values with ');. This will round out the INSERT statements. At this point, you should have valid INSERT statements that can be used in SQL Server. They should look similar to the ones I have below.
   /* 1 */
   --{
      INSERT INTO [dbo].[MongoDbObjectIds] ([ObjectId]) VALUES ('5cba07dcca67bd08e8a6b3e2');
   --}

   /* 2 */
   --{
      INSERT INTO [dbo].[MongoDbObjectIds] ([ObjectId]) VALUES ('5cba07b4ca67bc33705ded1d');
   --}

   /* 3 */
   --{
      INSERT INTO [dbo].[MongoDbObjectIds] ([ObjectId]) VALUES ('5cba07baca67bc33705ded1e');
   --}

All that needs to be done now is to copy the INSERT statements, run them in SQL Server Management Studio and you are done.

Now it must be noted that this approach to getting ObjectIds out of MongoDB and into SQL Server is useful only if you are doing it once or twice. As soon as you have to repeat the process multiple times throughout the week, or worse, throughout the day, then by all means write a migration/utility app to automate it. It might take you longer to finish, but the re-usability of said migration application will pay for itself in the future.
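
If you do reach that point, even a small script can replace the text-editor gymnastics. For example, a mongo shell snippet along these lines (using the collection and table names from the examples above) prints the INSERT statements directly:

```
db.getCollection('Collection').find({}, { _id: 1 }).forEach(function (doc) {
    print("INSERT INTO [dbo].[MongoDbObjectIds] ([ObjectId]) VALUES ('"
        + doc._id.valueOf() + "');");
});
```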

Hope you found this post helpful. Happy Friday to everyone!

Tags: #MongoDB #SqlServer #Scripts #Database


Have you ever had to do testing wherein you had to move your system date/time forward or back? If so, you will probably agree that one of the annoying things is remembering to reset the system date/time back to the current date/time. Most people will manually do this, which can be tiring when done multiple times during the day. If you are working in an office environment, then your workstation's system date/time can most likely be synced to a domain controller. Here is how you can easily do that using the command prompt in Windows.

Open up the command prompt and type in the command listed below:

net time /domain /set /y

You might have to open the command prompt in Administrator mode to get it to work. (Also note that net time /set is considered deprecated on newer versions of Windows; w32tm /resync is the supported way to force a time sync.)

Taking this a step further, what if you could automatically reset the system date/time on your workstation after a test finishes? I've had to do something similar, since I maintain some Coded UI tests that can change the system date/time as part of their testing. So I wrote a utility method in C# that resets the system date/time after a Coded UI test ends. It is called via the TestCleanup method that runs after a test ends.

[TestCleanup]
public void CleanUp()
{
    ResetLocalSystemTime();
}

private void ResetLocalSystemTime()
{
    using (Process netTime = new Process())
    {
        netTime.StartInfo.FileName = "NET.exe";
        netTime.StartInfo.Arguments = @"TIME /DOMAIN /SET /Y";
        netTime.StartInfo.UseShellExecute = false;
        netTime.StartInfo.CreateNoWindow = false;
        netTime.Start();

        // Note: a bare Task.Delay(1000) is not awaited and returns
        // immediately, so wait for the NET command to finish instead.
        netTime.WaitForExit();
    }
}

Hope this helps some of the devs out there doing some testing. If you have a different way of doing this, do share them in the comments below or share them in a message.

Tags: #CSharp #DotNet #Tests


As the title states, you cannot use UPPERCASE letters when naming Containers in Azure Storage. I have no problem with this. My issue is that the exception message returned when running into this restriction does not help me figure out what I did wrong.

This is the exception message I got: The remote server returned an error: (400) Bad Request.

Yeah, no help at all. Thankfully, since I was writing the code, I could see where it ran into the exception, so I knew it was an issue with trying to create a Container. It would have been a much bigger headache had I been trying to support a library that didn't have good exception logging or show the correct stack trace.

I can somewhat understand why they did it this way though; it's most likely for security reasons that they are returning a very vague error. Either way, you have to know that there are a bunch of naming rules concerning how to name things in Azure Storage. For the official list from Microsoft, you can head over here.
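
From memory, the container rules are: 3 to 63 characters; lowercase letters, numbers, and hyphens only; starting and ending with a letter or number; and no consecutive hyphens. A quick client-side guard like this hypothetical helper gives you a clearer failure than the 400 response does (double-check the pattern against the official rules before relying on it):

```csharp
using System.Text.RegularExpressions;

public static class ContainerNames
{
    // 3-63 chars, lowercase letters/digits/hyphens, must start and end
    // with a letter or digit, and no "--" anywhere in the name.
    public static bool IsValid(string name) =>
        Regex.IsMatch(name, "^[a-z0-9](?!.*--)[a-z0-9-]{1,61}[a-z0-9]$");
}

// ContainerNames.IsValid("MyContainer") is false (uppercase letters)
// ContainerNames.IsValid("my-container") is true
```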

Tags: #Azure


If you've ever needed to get a list of the base classes/types for an object in C#, this is one way of doing it. In my case, I had an object which was of the base class/type, but it was really a derived type.

Example: I had an object of type Animal, but it was really an instance of the Dog class, which is derived from the Animal base class.

The beauty of inheritance in object-oriented programming is that the current instance of an object can be of the derived type or the base type; it doesn't matter. As long as the code expects an object of the base type, you can provide an instance of either one.

I needed to record the type hierarchy for that object when saving it to the database. So this utility method is what I came up with in short order. I am returning a list of strings in my example below, but there is nothing stopping you from returning an array of Types or whatever else you may need based on your scenario.

private List<string> GetTypeHierarchy()
{
    List<string> typeHierarchy = new List<string>();

    Type currentType = this.GetType();
    typeHierarchy.Add(currentType.Name);

    Type baseType = currentType.BaseType;
    while (baseType != null)
    {
        typeHierarchy.Add(baseType.Name);
        baseType = baseType.BaseType;
    }

    typeHierarchy.Reverse();
    return typeHierarchy;
}

I intentionally wrote this without recursion, because recursion hurts my head haha. There might be a better/cleaner way of doing this; if so, do share your solution with me.
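
For illustration, here is a hypothetical Animal/Dog pair with the method made public on the base class, so any derived instance can report its lineage:

```csharp
using System;
using System.Collections.Generic;

class Animal
{
    public List<string> GetTypeHierarchy()
    {
        List<string> typeHierarchy = new List<string>();

        // Start with the actual runtime type (e.g. Dog, not Animal).
        Type currentType = this.GetType();
        typeHierarchy.Add(currentType.Name);

        // Walk up the chain of base types until we run out (past Object).
        Type baseType = currentType.BaseType;
        while (baseType != null)
        {
            typeHierarchy.Add(baseType.Name);
            baseType = baseType.BaseType;
        }

        typeHierarchy.Reverse();
        return typeHierarchy;
    }
}

class Dog : Animal { }

// new Dog().GetTypeHierarchy() returns ["Object", "Animal", "Dog"];
// the Reverse() call is why the base types come first.
```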

#CSharp #DotNet


Recently I ran into an issue where I needed to exclude a property from getting serialized using Json.NET. The easy answer is to add a [JsonIgnore] attribute to the property. The problem with doing that is it will also ignore the same property during deserialization. So I needed a solution that allows me to ignore a property during serialization, but still set that property's value during deserialization. Thankfully I found a blog post from 2013 that explains exactly how to do that. I would have wasted more hours searching for an answer had I not found this solution right away.

There's a little known feature of Json.NET that lets you determine at runtime whether or not to serialize a particular object member: On the object you're serializing, you have to define a public method named ShouldSerialize{MemberName} returning a boolean value. – Marius Schulz

Visit Original Post: Conditionally Serializing Fields and Properties with Json.NET

It was only after I found Marius' blog post that I then found the documentation talking about conditional property serialization on the Newtonsoft website.
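
As a quick sketch of the pattern (the class and property names here are made up for illustration):

```csharp
using Newtonsoft.Json;

public class AppConfig
{
    public string Name { get; set; }
    public string Secret { get; set; }

    // Json.NET finds this method by naming convention. Returning false
    // omits Secret when serializing, but the property can still be
    // populated when deserializing.
    public bool ShouldSerializeSecret() => false;
}

// JsonConvert.SerializeObject(new AppConfig { Name = "demo", Secret = "x" })
//     produces {"Name":"demo"} with Secret left out.
// JsonConvert.DeserializeObject<AppConfig>("{\"Name\":\"demo\",\"Secret\":\"x\"}")
//     still sets Secret to "x".
```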

This is one of the rare instances where I didn't find the answer on Stack Overflow. It makes me grateful for the developers who are still cranking out blog posts and sharing solutions to problems on their personal blogs/websites.

#Bookmarks #JsonDotNet #DotNet #Serialization

