Source control is an essential aspect of software development, allowing teams to collaborate and manage the changes made to source code over time. However, not all source control systems are created equal. One system that has long been used by many development teams is Microsoft’s SourceSafe. While this system has served its purpose in the past, it’s time to say goodbye to SourceSafe and embrace a more modern source control system - Git.

One of the biggest risks of using SourceSafe is the potential loss of source code. SourceSafe is notorious for its instability and lack of reliability. It’s a common occurrence for users to experience database corruption, which can result in the loss of entire projects or portions of the codebase. This is a huge issue for businesses that rely on their source code to drive their products and services.

SourceSafe’s database corruption is not an isolated issue; it happens frequently. When a database gets corrupted, the contents of the codebase can become altered or even disappear completely. This can set a project’s timeline back significantly and hurt the bottom line. In some cases, it can even mean losing years of hard work and valuable code that cannot be easily recreated.

In contrast, Git is a distributed version control system that is known for its reliability and stability. Git provides a robust and scalable solution for managing source code, and it is widely used by development teams around the world.

If you’re still using SourceSafe, it’s time to make the switch to Git. But how do you get all of your source code from SourceSafe to Git? This is where Castellum comes in. Castellum is an awesome migration tool that makes it easy to move your source code from SourceSafe to Git. With Castellum, you can easily import your entire SourceSafe database into Git, preserving all of your history, tags, and branches. Castellum even provides a visual comparison of the source code before and after the migration, so you can be confident that nothing was lost in the process.

Castellum is user-friendly and easy to use, making it the perfect solution for teams who want to make the switch to Git. With Castellum, you can quickly and efficiently migrate your source code from SourceSafe to Git, and enjoy all of the benefits that Git has to offer.

In conclusion, the dangers of using SourceSafe cannot be ignored. With frequent database corruption and the real risk of losing valuable source code, it’s time to make the switch to Git. With Castellum, you can easily and efficiently migrate your source code from SourceSafe to Git, and enjoy all of the benefits that Git has to offer. Don’t let your business be vulnerable to the dangers of SourceSafe - make the move to Git today.

Intro

I’ve been working on a multitenant application where some services need to know the current tenant for each request. It would be possible to identify the tenant in the upper layers (controllers) and pass that information all the way down to the services, but that generates a lot of repetitive/polluted code. A natural idea is to use Dependency Injection to pass this information around.

Injecting Runtime Data into Services

Immutable data (like connection strings or any other configuration data used during app initialization) certainly needs to be injected, but it is NOT considered runtime data.

Runtime data is information that may change across requests, like current logged user, or current tenant (organization).

Using dependency-injection to inject runtime data into your services is a common but controversial practice. The argument in favor of injecting runtime info is that it can improve code readability/maintenance by avoiding having to add a parameter all the way up the chain (in all direct and indirect consumers).

This is an example of how we can inject runtime data into our services:

// Registration
services.AddTransient<ClaimsPrincipal>(s =>
    s.GetService<IHttpContextAccessor>().HttpContext.User);

// Usage
public class MyService
{
    public MyService(ClaimsPrincipal user)
    {
        // now you have user.Identity.Name;
    }
}

In the example above, ClaimsPrincipal is populated by the ASP.NET authentication middleware (it’s a cheap call based solely on cookies) and saved into HttpContext.User. So whoever consumes this service can just invoke it without having to pass in the current logged-in user.

Using the same idea it would be possible to add to the pipeline a middleware to resolve the current tenant and make it available in HttpContext:

app.Use(async (context, next) =>
{
    // HttpContext is already available as the middleware's first parameter
    var tenant = await ResolveTenantAsync(context);
    context.Items["Tenant"] = tenant;
    await next.Invoke();
});

// Registration
services.AddTransient<ITenantContext>(s =>
    (ITenantContext) s.GetService<IHttpContextAccessor>().HttpContext.Items["Tenant"]);

The first problem here is that this approach forces every request in the pipeline to resolve a Tenant (which might be expensive). So unless you really need it in all requests (or unless you can decide, based on the request, whether you need it), I think it’s a better idea to just rely on DI and let the container resolve your tenant only when required (and cache it), instead of resolving it in the request pipeline.

The other problem here is that your runtime data has a short lifetime and you don’t want to inject it by mistake into objects with a longer lifetime (like singletons). In other words, if MyService were a singleton it would internally store the identity of the user making the first request, and all subsequent requests (from other users) would be associated with the wrong user. So you have to correctly (re)define the lifetime of your components.
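To make that lifetime rule concrete, here is a minimal registration sketch (MyService is the hypothetical service from the earlier example):

```csharp
// ClaimsPrincipal changes per request, so any consumer must have a
// lifetime no longer than the request (scoped or transient).
services.AddTransient<ClaimsPrincipal>(s =>
    s.GetRequiredService<IHttpContextAccessor>().HttpContext!.User);

// Safe: one MyService instance per request.
services.AddScoped<MyService>();

// Dangerous: a singleton would capture the FIRST request's user and
// keep serving it to every subsequent request (a "captive dependency").
// services.AddSingleton<MyService>();
```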

So injecting runtime data is a bad idea?

Many developers believe that you should not inject runtime data into services. This article explains that components should ideally be stateless, and that runtime data should not contain behavior and should be passed through method parameters (instead of injected into constructors).

But then the same article explains that passing runtime data across a chain of callers can pollute the code, and shows examples of components that can benefit from having runtime data injected into them (Repository and, more specifically, Unit of Work).

Injecting runtime data is especially helpful in lower layers (like data-access), because the lower you go, the more layers you would have to modify just to pass repetitive/boilerplate data (all the way up to the layer where the runtime data can be obtained).

To sum up: we want a solution where we don’t need to pass repetitive runtime data through a bunch of layers, but where we also don’t have to directly inject transient runtime data, since that would affect the lifetime of our components.

Adapters

Those problems can be solved by writing an adapter to resolve your runtime data (e.g. identify your tenant). The lifetime mismatch is now handled in a single place:

public class AspNetTenantAccessor : ITenantAccessor
{ 
  private IHttpContextAccessor _accessor;
  public AspNetTenantAccessor(IHttpContextAccessor accessor)
  {
    _accessor = accessor;
  }

  // or maybe Task<AppTenant> GetCurrentTenantAsync()
  public AppTenant CurrentTenant
  {
    get
    {
      // resolve (only when required) based on _accessor.HttpContext
      // cache it (per-request) in _accessor.HttpContext.Items["Tenant"]
    }
  }
}

// Registrations:
builder.Services.TryAddSingleton<IHttpContextAccessor, HttpContextAccessor>();
builder.Services.AddSingleton<ITenantAccessor, AspNetTenantAccessor>();

Now in your service layer you can just inject ITenantAccessor and get the current tenant (or any other runtime data), and we won’t have runtime dependency-resolution problems (we don’t need to test whether all the different consumers are correctly resolving the current tenant).
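For illustration, a consumer in the service layer might look like the sketch below (MyReportService and the AppTenant.Name property are assumptions, not from the original code):

```csharp
// A hypothetical service that depends only on ITenantAccessor,
// with no coupling to HttpContext or other web abstractions.
public class MyReportService
{
    private readonly ITenantAccessor _tenantAccessor;

    public MyReportService(ITenantAccessor tenantAccessor)
    {
        _tenantAccessor = tenantAccessor;
    }

    public string BuildReportTitle()
    {
        // The tenant is resolved lazily, only when actually needed
        AppTenant tenant = _tenantAccessor.CurrentTenant;
        return $"Report for {tenant.Name}";
    }
}
```

Because the service only sees ITenantAccessor, the same class works unchanged under ASP.NET, in unit tests, or from any other caller.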

You may be wondering: Why can’t you just inject IHttpContextAccessor and get the current tenant directly from HttpContext.Items["Tenant"]? That’s because it would be a leaky abstraction - you don’t want your services to be tightly coupled to this Http abstraction (it would be harder to reuse them from other callers like unit tests).

So is that the “right way” of doing it? What does “Microsoft tells us to do”?

In 2015 David Fowler (from ASP.NET Core team) showed an example of how to Inject ClaimsPrincipal in Service Layer by using an IUserAccessor adapter (exactly like our example above):

public class UserAccessor : IUserAccessor
{
    private IHttpContextAccessor _accessor;
    public UserAccessor(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }
    public ClaimsPrincipal User => _accessor.HttpContext.User;
}

That example alone might be interpreted as saying that injecting transient runtime data is just plain wrong. But more recently, in 2021, someone suggested that ClaimsPrincipal should be made available (as injectable) in ASP.NET Core Minimal APIs (avoiding the dependency on IHttpContextAccessor), and the same David Fowler accepted that suggestion and the PR that registers ClaimsPrincipal / HttpRequest / HttpResponse (all of them transient runtime data). So this is a hint that the “don’t inject runtime info” rule is not set in stone, and might be just a design choice.
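In Minimal APIs that looks like the minimal sketch below: ClaimsPrincipal is bound directly into the endpoint handler, with no IHttpContextAccessor required:

```csharp
using System.Security.Claims;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// ClaimsPrincipal (transient runtime data) is injected straight
// into the handler by Minimal API parameter binding.
app.MapGet("/me", (ClaimsPrincipal user) =>
    user.Identity?.Name ?? "anonymous");

app.Run();
```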

Configurable Pipeline

So now I have an ITenantAccessor interface, and I can switch between multiple implementations.

For ASP.NET I could have an accessor that identifies tenants by hostname; for local (localhost) debugging or for tests I can use a different accessor that fakes my test tenant.
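The fake accessor for tests could be as simple as this sketch (FakeTenantAccessor is a hypothetical name; ITenantAccessor follows the CurrentTenant shape from the adapter example above):

```csharp
// A fixed-tenant accessor for localhost debugging and unit tests:
// it always returns the tenant it was constructed with.
public class FakeTenantAccessor : ITenantAccessor
{
    private readonly AppTenant _tenant;

    public FakeTenantAccessor(AppTenant tenant) => _tenant = tenant;

    public AppTenant CurrentTenant => _tenant;
}

// Test/localhost registration:
// services.AddSingleton<ITenantAccessor>(new FakeTenantAccessor(myTestTenant));
```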

But then there was another requirement: tenants will usually be identified by their hostnames, but in some cases the same hostname will be shared among all tenants and they should be identified by the request path, or by some request header or cookie. So I wanted a configurable pipeline where I can add or remove steps as needed, and more importantly: not every step needs to run - each step may short-circuit the remaining ones.

The Pipeline Design Pattern is a common design pattern where a Pipeline holds a list of steps that are processed in order, with each task usually building upon the results of the previous one. One nice implementation of this pattern is the ASP.NET Core request pipeline, where each step creates (and runs) a Task that receives the next step as input and can either short-circuit the request (skip the next steps) or proceed to the next step in the pipeline. I was familiar with using this request pipeline (e.g. adding middlewares) but I hadn’t had a chance to look at how it works internally, so I decided to build something similar, which led me to a StackOverflow answer that in turn led me to the source of Microsoft.AspNetCore.Builder.ApplicationBuilder.Build().

Basically this is how it works:

  • Each step in the pipeline is a Task, but steps are registered as factory Funcs that take the next Task (if any) and create the step’s Task.
  • Since those Func factories always receive the next pipeline step (the next Task), each Task can decide to run the next step or skip it (short-circuit).
  • The Build() method starts by creating the Task for the last step, which is a fallback step that checks whether things went fine or the pipeline is misconfigured. Then it goes in reverse order through the previous pipeline steps, chaining each step to the subsequent one. The last Task created in the loop is actually the first step where the pipeline starts, and that’s what Build() returns.
  • Tasks in the ASP.NET request pipeline don’t return anything, and besides the next Task they also receive HttpContext as input.
  • In my case steps don’t receive any input, but each step can return a Tenant.

Based on that code I’ve created a quick implementation to resolve tenants using a generic/configurable pipeline:

public class PipelineTenantResolver<TTenant> : ITenantAccessor
{
    // Steps from my pipeline don't get anything, 
    // and may short-circuit to return a TTenant
    public delegate Task<TTenant> IPipelineTenantRepositoryDelegate();

    // Steps from ASP.NET pipeline (for comparison) get HttpContext 
    // but don't return anything:
    //public delegate Task RequestDelegate(HttpContext context)

    protected List<Func<IPipelineTenantRepositoryDelegate,
        IPipelineTenantRepositoryDelegate>> _steps = new();

    // After Build() we cache pipeline first step
    protected IPipelineTenantRepositoryDelegate? _built;

    public void Use(Func<IPipelineTenantRepositoryDelegate,
        IPipelineTenantRepositoryDelegate> step)
    {
        _steps.Add(step);
        _built = null;
    }

    protected IPipelineTenantRepositoryDelegate Build()
    {
        // We start building pipeline with last step
        IPipelineTenantRepositoryDelegate app = () =>
        {
            // If we reach the end of the pipeline then no Tenant was found
            throw new TenantNotFoundException();

            // A non-throwing alternative would be:
            //return Task.FromResult(default(TTenant)!);
        };

        // Then we go back passing the next step to each previous step
        for (var c = _steps.Count - 1; c >= 0; c--)
            app = _steps[c](app);

        _built = app;
        return app;
    }

    public async Task<TTenant?> GetTenantAsync() => await (_built ?? Build()).Invoke();
}

Sample test:

var pipelineResolver = new PipelineTenantResolver<AppTenant>();

// First step: short-circuits and returns a fixed tenant
bool shouldShortCircuit = true;
Task<AppTenant> resolveTenant = Task.FromResult(new AppTenant());
pipelineResolver.Use((next) => {
    if (shouldShortCircuit)
        return () => resolveTenant;
    return next;
});
// next steps...
//pipelineResolver.Use(next => next); // etc..
// Last step (defined in Build()) will just throw

var tenant = await pipelineResolver.GetTenantAsync();
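As a sketch of a more realistic step (httpContextAccessor, hostnameToTenant and LoadTenantsByHostname are assumptions for illustration, not from the original code), a hostname-based resolver would short-circuit only when it finds a match:

```csharp
// Hypothetical hostname-based step: resolves the tenant from the request
// host and falls through to the next step when no mapping exists.
Dictionary<string, AppTenant> hostnameToTenant = LoadTenantsByHostname();

pipelineResolver.Use(next => async () =>
{
    var host = httpContextAccessor.HttpContext?.Request.Host.Host;
    if (host != null && hostnameToTenant.TryGetValue(host, out var tenant))
        return tenant;       // short-circuit: tenant found
    return await next();     // no match, try the next step
});
```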

Finally, in my services registration I register different implementations of ITenantAccessor depending on the entry point (web app, web API, unit tests, etc), and Task&lt;AppTenant&gt; is registered with a factory that calls ITenantAccessor.GetTenantAsync():

builder.Services.AddTransient<Task<AppTenant>>(sp => sp.GetRequiredService<ITenantAccessor>().GetTenantAsync());

PS: all the code above assumes that resolving a tenant is an async call, which is why I registered a Task<> (if the code weren’t async it would be even easier). If you want to inject the resolved AppTenant directly (instead of a Task<AppTenant>), that would mean a blocking call (not recommended in constructors) and mixing sync with async (also not recommended). In Autofac it’s possible to register a factory that uses async code; I’m not sure whether that’s also possible in Microsoft.Extensions.DependencyInjection - but either way, blocking calls during injection/construction are not recommended, so it’s best to inject Task<AppTenant> instead.

It’s been a while since I last wrote anything, so I hope you enjoyed this article.

A few days ago my wife accidentally deleted some old videos of our daughter (from when she was saying her first words) which were only stored in her WhatsApp. I don’t even have to mention how important those videos were to us (she hadn’t been following my advice about backing up her media) - so I had to help recover the videos.

Since WhatsApp stores backups in Google Drive, I thought I could just uninstall and reinstall the app and it would restore the backup. That WOULD be easy, but her WhatsApp was still using an old phone number from Brazil, which wasn’t even here with us in America (yeah, I know - she just didn’t want to change the number that all her family had, etc). Since we wouldn’t be able to reactivate WhatsApp with that old number, the first thing I did was update her number, to allow me to restore the previous week’s Google Drive backup.

After we updated her WhatsApp number, I disabled backups, or else I would have overwritten the previous week’s backup and lost the videos she was looking for.

After uninstalling and reinstalling the app, I learned the hard way that WhatsApp backups (both messages and media) taken under one phone number cannot be restored to a new number. So basically I lost not only all her messages but also all her media.

So then I found this great tool which allows us to Extract (download) the full WhatsApp backup from Google Drive: https://github.com/YuriCosta/WhatsApp-GD-Extractor-Multithread.
Basically you get your device id, fill it in together with your Google username/password, and the script will download all pictures/videos and even the encrypted backup of the chats (msgstore.db.crypt14).

Then, to restore her chat history, we had to change her number back to the old Brazilian number and ask her family in Brazil to get the activation code for us, so we could restore from the Google Drive backup.

At the end of the day, I didn’t even have to dump the Google Drive backup, but at least I learned something new that may be useful.

Also, while trying to recover her messages, I learned about this other whapa toolkit, which contains tools to encrypt or decrypt the WhatsApp datastore (requires the private key), merge two data stores (merge chat histories from different phones; also requires the private key), and extract media from Google Drive (I haven’t tested) or iCloud. If you’re trying to decrypt the new format (crypt15) you may need this other tool.

And last: if you need to extract the private key of your WhatsApp and your Android is not rooted, you can use this tool - https://github.com/YuvrajRaghuvanshiS/WhatsApp-Key-Database-Extractor (I had to use python wa_kdbe.py --allow-reboot) which basically will install an old build of WhatsApp and exploit it to dump the private key using adb.

Fun learning anyway.