Thursday, August 27, 2015

Simplistic JavaScript dependency injection with ES6 destructuring

Recently I got a bit tired of Angular's quirks and intricacies. To freshen up, I'm playing with framework-less JavaScript (Vanilla JS). I'm also getting more and more used to ES6 features. One of the outcomes so far is an idea for a Dependency Injection approach that stays simplistic, decoupled from any framework and still convenient to consume.

Destructuring

One of the features I like most in ES6 is destructuring. It introduces a convenient syntax for getting multiple values out of arrays or objects in a single step, like this:

let [lat, lng] = [54.4049, 18.5763];
console.log(lat); // 54.4049
console.log(lng); // 18.5763

or like this:

let source = { first: 1, second: 2 };
let { first, second } = source;
console.log(first, second); // 1, 2

What is even nicer, it works fine in function definitions, too, making it a great replacement for the config object pattern, where instead of providing a large number of parameters, some of them potentially optional, we pass a single plain configuration object and read all the relevant options from inside that object. So, with ES6 destructuring (plus default parameter values), instead of this:

function configurable(config) {
    var option1 = config.option1 || 123;
    var option2 = config.option2 || 'abc';
    // the actual code starts here...
}

we can move all that read-config-and-apply-default-if-needed work directly into the parameter list:

function configurable({ option1 = 123, option2 = 'abc' }) {
    // the actual code from the very beginning...
}

The code is equivalent for all practical purposes (strictly speaking, the destructuring defaults kick in only for undefined values, while the || fallback also replaces falsy ones like 0 or an empty string) and the change doesn't require any modifications on the caller side.
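Either version is called the same way:

configurable({ option2: 'xyz' });              // option1 falls back to 123
configurable({ option1: 5, option2: 'xyz' });  // both provided explicitly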

Injecting

We can use destructuring to provide an Angular-like experience for receiving dependencies in a class or a function - one that is even more cruft-free, as it's minification-safe and thus doesn't require tricks like the ones ngAnnotate does.

Here is how it can look from the dependency consumer's side:

function iHaveDependencies({ dependency1, dependency2 }) {
    // use dependency1 & dependency2
}

Whenever we invoke the iHaveDependencies function, we need to pass it a single argument: an object with dependency1 and dependency2 keys, but possibly with others, too. Nothing prevents us from passing an object that holds all the possible dependencies there - a container.

So the last thing is to ensure we have one available whenever we create the objects (or invoke the functions):

// possibly create it once and keep it for a long time
let container = { 
  dependency1: createDependency1(),
  dependency2: createDependency2(),
  dependency3: createDependency3(),
  otherDependency: createOtherDependency() 
};

// use our "container" to resolve dependencies
iHaveDependencies(container);

That's all. The destructuring mechanism will take care of populating the dependency1 and dependency2 variables within our function seamlessly.

We can easily build a dependency injection "framework" on top of that. The implementation would vary depending on how our application creates and accesses its objects, but the general idea holds. Isn't that neat?
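As an illustration, here is one shape such a helper could take - a minimal sketch with a hypothetical register function, not any particular library's API:

let container = {};

// register is a made-up helper: it puts a lazily-created, cached
// dependency onto the container under the given name
function register(name, factory) {
    let instance;
    Object.defineProperty(container, name, {
        get: () => instance || (instance = factory(container))
    });
}

register('logger', () => console);
register('greeter', ({ logger }) => name => logger.log('Hello, ' + name + '!'));

// consumers still just destructure what they need
function app({ greeter }) {
    greeter('world');
}

app(container); // Hello, world!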

PS. Because of the lack of direct ES6 support in the browsers, running that in a browser right now requires transpiling it down to ES5 beforehand. Babel works great for that.

This post was originally posted on my company blog.

Wednesday, May 27, 2015

iOS layouts for web developers

I'm quite ashamed I'm still failing to write something on a regular basis here. But it's not that I'm not writing anything - I've just concluded my iOS layouts for web developers series on the Bright Inventions blog.

Almost all of my past development experience is centered around the web. Just recently I had an opportunity to dive into iOS development, and while I enjoy it, I miss a lot of things from the web development world. I quickly discovered that applying techniques and approaches directly from the web is often not possible, and sometimes I had to switch to a different mindset from the one I'm used to. To make things easier, I was looking for an iOS beginner guide targeted specifically at web developers like me, but I haven't found any. This is how the idea for this series of blog posts was born.

Have a look!

Wednesday, November 5, 2014

Attaching ShareJS to <select> element

One thing I found missing in the ShareJS library was the possibility to attach live concurrent editing to an HTML <select> element. Out of the box it works only with text fields - <input> and <textarea> - using the doc.attachTextarea(elem) function.

Working around that deficiency wasn't so trivial. ShareJS works with Operational Transformations, extracting each logical change to the text (an addition or a removal) and sending only that change information over the wire. This is great for textual elements, but for <select>, whose value is always replaced in one shot, it makes little sense.

Unfortunately, there is no "replace" operation we could use on a <select> value change - the modus operandi we have to live with is constrained to insertions and removals. It means we have to mimic the "replace" operation with a removal and an insertion. The problem with this approach is that when the operations get reversed - so that the client receives the new value's insertion first and then the removal of the previous value - the intermediate value in-between is not a valid <option>: it is a concatenation of the old and new values. The DOM API doesn't like that and rejects the change, setting the <select> value to an empty string. The removal operation that comes next is then unable to fix the value, as it tries to remove something from an already empty string in the DOM.
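To make it concrete, here is what the DOM does in that scenario (assuming a <select> whose only options are red and blue):

// assume: <select id="color"><option>red</option><option>blue</option></select>
var select = document.getElementById('color');
select.value = 'red';

// the insertion arrives first, producing the concatenated value
select.value = 'redblue';    // no such <option>...
console.log(select.value);   // '' - the DOM rejected the change

// the removal that follows tries to strip 'red' from '' - it cannot,
// so the <select> never reaches the intended 'blue'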

I have worked around that by wrapping my DOM element with a tiny wrapper that keeps the raw value and exposes it to the ShareJS transformations while still updating the original element's DOM:

var rawValue = innerElem.value;
var elem = {
    // ShareJS reads and patches the raw value...
    get value () {
        return rawValue;
    },
    // ...while each write is also mirrored to the real <select>,
    // which accepts it as soon as it matches a valid <option> again
    set value (v) {
        rawValue = v;
        innerElem.value = v;
    }
};

ShareJS also doesn't attach itself to the change event, typical for the <select> element - it specializes in keyboard events. So I had to attach to it on my own and relay the event to the underlying ShareJS implementation, faking an event of a type that is handled by the library - I've chosen the mysterious textInput event.
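Conceptually, the relay boils down to something like this (a simplified sketch, not the exact code from the gist below):

// when the <select> changes, synthesize a 'textInput' event - an
// event type the library's attachment code already listens for
innerElem.addEventListener('change', function () {
    var fakeEvent = document.createEvent('Event');
    fakeEvent.initEvent('textInput', true, true);
    innerElem.dispatchEvent(fakeEvent);
});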

Here is the full code as a Gist: ShareJS attachSelect. It adds a new function to the Doc prototype, allowing us to call it the same way we call ShareJS's native attachTextarea:

if (elem.tagName.toLowerCase() === 'select') {
    doc.attachSelect(elem);
} else {
    doc.attachTextarea(elem);
}

Feel free to use the code - I hope someone finds it useful.

This post was originally posted on my company blog.

Thursday, October 30, 2014

ShareJS 0.7.3 working example

I’m experimenting with the ShareJS library, which is intended to allow live concurrent editing like in Google Docs. The demo on their website seems incredibly easy, even though later on the page they are quite cruel to themselves: “ShareJS is mostly working, but it’s still a bit shit.” I wouldn’t be so harsh, as I was able to have it up and running in less than a few hours. But the fact is, it wasn’t as easy as it seemed.

It looks like the main problem with the current state of ShareJS is something pretty common in the wild and uncontrolled open source world - the lack of proper documentation. Here the problem is even worse: there are some docs and examples, but most of them are either incomplete or outdated. The ShareJS.org website runs on ShareJS 0.5, while the most recent release is 0.7.3, with no backward compatibility between those releases. I think it would be less harmful if there were no examples at all - right now they are more misleading than helpful. It was a bit frustrating when even the shortest and simplest snippet from their website didn’t work, failing on calls to non-existing functions.

Anyway, I was able to figure out what I needed to change to get the simple demo running, both server- and client-side. Here it is, in case you have the same struggle.

On the server side, I’m running a CoffeeScript WebSocket server, almost like in the original sample. I just needed a few changes to have it running with Connect 3 - the logging and static serving middlewares are no longer included in Connect out of the box, so I used morgan and serve-static, respectively. Here is the only changed part, around the Connect middleware initialization:

app = connect()
app.use morgan()
app.use '/srv', serveStatic sharejs.scriptsDir
app.use serveStatic "#{__dirname}/app"

Go here for the full Gist: ShareJS 0.7.3 server-side code.

I’m exposing the client JavaScript libraries provided with ShareJS under the /srv path, while the client-facing web application files, physically located in /app on my filesystem, are served directly from the root path.

The client side was a bit harder. Running the original code from the main ShareJS.org website wasn’t successful:

sharejs.open('blag', 'text', function(error, doc) {
  var elem = document.getElementById('pad');
  doc.attach_textarea(elem);
});

It tries to call the sharejs.open function, which yields a “TypeError: undefined is not a function” error for a simple reason - there is no longer an “open” function on the sharejs global variable. Fiddling around, I found an example using a more verbose call like this:

var ws = new WebSocket('ws://127.0.0.1:7007');
var share = new sharejs.Connection(ws);
var doc = share.get('blag', 'doc');

if (!doc.type) {
    doc.create('text');
}
       
doc.whenReady(function () {
    var elem = document.getElementById('pad');
    doc.attachTextarea(elem);
});

It seemed legitimate and didn’t fail immediately, but I was getting an “Operation was rejected (Document already exists). Trying to rollback change locally.” error message every time except the first. The code was calling doc.create('text') on every run, and that was clearly wrong - doc.type should have come pre-populated somehow. The solution is to subscribe to the document first and move the type check and creation into the callback invoked once the document is ready - like this:

var ws = new WebSocket('ws://127.0.0.1:7007');
var share = new sharejs.Connection(ws);
var doc = share.get('blag', 'doc');
doc.subscribe();

doc.whenReady(function () {
    if (!doc.type) {
        doc.create('text');
    }

    var elem = document.getElementById('pad');
    doc.attachTextarea(elem);
});

See the full Gist: ShareJS 0.7.3 client-side code.

This post is cross-posted with my company blog.

Thursday, July 10, 2014

The Switch

A little more personal than usual today. At the end of June I left a job on a long-term, 40+ developer project in a large international company. I didn't feel underpaid, and my job was neither mundane nor exhausting. But I felt it was high time to move on.

I'm now working with a company that employs around a thousand times fewer people and focuses on projects that are much smaller in scope and much shorter in time than those in my previous job. I can see the goods and bads of that change. I have more opportunities to try out and learn new things, and I can stay away from the office politics that I hate, but it comes at the cost of probably reduced job stability and the lack of the long-term project maintenance challenges that I really enjoy.

The biggest motivation I had was to test my own belief that software engineering is much more a mindset than pure knowledge. I've always wanted to be open to programming languages and technology stacks other than the ones I was currently working with. I think the language or the platform is just a tool, and a good software developer should not be constrained to only one particular toolset. I believe switching from one to another should be just a matter of getting familiar with the practices and conventions of the given platform, plus some practice. And the experience gained on other platforms can actually be used to get the best of both worlds.

So here I am - I no longer consider myself a database-inclined .NET guy. Now I'll be working in a variety of technologies, centered around mobile and front-end web development - from iOS to Android, from Node.js to AngularJS etc. The key is I have virtually no experience in any of those stacks. Right now I feel a bit like a toddler - I'm learning hundreds of new basic things every day. And that's the fun!

Wednesday, June 18, 2014

StructureMap: hot swap cache

In the previous post I've shown how to cache objects in StructureMap for a given period of time. As I mentioned in that post, there is one possibly serious downside of the approach presented - the penalty of cache rebuilding that hits one unlucky user every caching period. If it takes more than several seconds for the cached object to be built, we probably don't want this to happen in-process, unless we're showing our users something like XKCD strips while they wait.

Ideally, we would rebuild our cache in some kind of off-process mechanism and, when it's ready, just replace the old cache object with the fresh one - like disk hot swapping. Is that also possible with StructureMap? Probably not with lifecycles - lifecycles do not control object creation, they just provide the proper cache.

What we can do instead is pre-build the cache object and inject it into the working container. But we can't use the container to prepare that cache object for us this time - the container would happily fulfill our request with the previously cached object. Although delegating the object creation process is actually one of the purposes we use IoC containers for, I can't see any neat way to keep that responsibility in the container for the whole application lifetime while still pre-building the cache.

So I've chosen the less neat way. I've created a cache factory that just news the cache up manually, while being itself created by StructureMap. That way, whenever the application asks for IDependency, it gets the cached instance quickly. But when the cache rebuilding task runs, it grabs DependencyFactory and creates a new object, a future cache.

Let's see the code. First, here is a base class for all the cache factories - CacheFactory. It smells a bit like a conforming container, but I find it not really harmful: it is not intended to be used in any context other than cache pre-building, and it is specialized to create a single type of object. Cache consumers should not know about it and should just take the ICache dependency through constructor injection or any other legitimate way.

public abstract class CacheFactory
{
    public abstract object InternalBuild();
    public abstract Type PluginType { get; }
}

public abstract class CacheFactory<T> : CacheFactory
{
    public T Build()
    {
        return (T)InternalBuild();
    }

    public override Type PluginType
    {
        get { return typeof(T); }
    }
}

The non-generic class is the core here - it defines the method responsible for returning the actual cache instance. The generic class is there just to keep the API nice and to allow defining strongly-typed constraints.

The second brick in the puzzle is the code that handles the actual cache hot swap. It spawns a new thread that wakes up every 600 seconds, traverses all the CacheFactories registered in the container, creates new cache instances and injects them into the working container. This way, up until the Inject call, StructureMap serves all the requests with the previously cached instance, and from the Inject call on it serves the new object, ready to be used without any further delays.

public class BackgroundCacheRefresher
{
    private readonly IContainer _container;
    private readonly ILog _log;

    public BackgroundCacheRefresher(IContainer container, ILog log)
    {
        _container = container;
        _log = log;
    }

    private class Worker
    {
        private readonly IContainer _container;
        private readonly IEnumerable<CacheFactory> _cacheFactories;
        private readonly ILog _log;

        public Worker(IContainer container, IEnumerable<CacheFactory> cacheFactories, ILog log)
        {
            _container = container;
            _cacheFactories = cacheFactories;
            _log = log;
        }

        public void RefreshAll()
        {
            foreach (var cacheFactory in _cacheFactories)
            {
                try
                {
                    _container.Inject(cacheFactory.PluginType, cacheFactory.InternalBuild());
                    _log.InfoFormat("Replaced instance of '{0}'.", cacheFactory.PluginType.Name);
                }
                catch (Exception e)
                {
                    _log.Error(String.Format("Failed to replace instance of '{0}' due to exception,"
                        + " will continue to use previously cached instance.", 
                        cacheFactory.PluginType.Name), e);
                }
            }
        }
    }

    private void RunLoop()
    {
        while (true)
        {
            var lifetime = 600; // seconds
            _log.InfoFormat("Will now go to sleep for {0} s.", lifetime);
            Thread.Sleep(TimeSpan.FromSeconds(lifetime));

            _log.Info("Woke up, starting refresh cycle.");
            _container.GetInstance<Worker>().RefreshAll();
        }
    }

    public void Execute()
    {
        new Thread(RunLoop).Start();
    }
}

I'm creating BackgroundCacheRefresher and calling its Execute method at the application startup. It starts with sleeping - the first cache is built "traditionally", as registered below.

Now we just need to wire things up in the Registry. I've created an extension method for the cache registration to make it clean and encapsulated. It registers both the cache object (as a singleton, to keep it in memory, but we'll replace it periodically with the code above) and its corresponding CacheFactory implementation.

public static class RegistryExtensions
{
    public static CacheBuilderDSL<T> UseHotSwapCache<T>(this CreatePluginFamilyExpression<T> expression)
    {
        return new CacheBuilderDSL<T>(expression);
    }

    public class CacheBuilderDSL<T>
    {
        private readonly CreatePluginFamilyExpression<T> _expression;

        public CacheBuilderDSL(CreatePluginFamilyExpression<T> expression)
        {
            _expression = expression;
        }

        public SmartInstance<TConcrete, T> With<TConcrete, TFactory>(Registry registry)
            where TConcrete : T
            where TFactory : CacheFactory<T>
        {
            registry.For<CacheFactory>().Use<TFactory>();
            return _expression.Singleton().Use<TConcrete>();
        }
    }
}

And here is how to use it:

For<IDependency>().UseHotSwapCache().With<ExpensiveDependency, ExpensiveDependencyFactory>(this);

The last thing is the factory - just newing up the cache object. Note that its dependencies can be provided in the typical, constructor-injected way.

public class ExpensiveDependencyFactory : CacheFactory<IDependency>
{
    private readonly IDependencyDependency _otherDependency;

    public ExpensiveDependencyFactory(IDependencyDependency otherDependency)
    {
        _otherDependency = otherDependency;
    }

    public override object InternalBuild()
    {
        return new ExpensiveDependency(_otherDependency);
    }
}

Whoa, a bit of code here. Maybe there is something simpler available - if so, drop me a line, please! Otherwise, feel free to use it.

Tuesday, June 17, 2014

StructureMap: time expiring objects cache

StructureMap is my favorite .NET IoC container. It has a very nice API and is quite well extensible. One of the things I use its extensibility points for is having my expensive objects cached for some time. Not a singleton, as the cached values change from time to time and I want to see those changes eventually. But also not a transient or per-request instance, as filling the cache is expensive - let's say it's a web service call that takes several seconds to complete. There is no such object lifecycle provided by StructureMap. Let's fix that!

What I need is a custom lifecycle object, so that I can configure my dependencies almost as usual - instead of, for example:

For<IDependency>().HybridHttpOrThreadLocalScoped()
    .Use<NotSoExpensiveDependency>();

I'll use my own lifecycle via the more generic LifecycleIs DSL method:

For<IDependency>().LifecycleIs(new TimeExpiringLifecycle(secondsToExpire: 600))
    .Use<DependencyFromWebService>();

LifecycleIs expects me to pass an ILifecycle implementation in. That interface is responsible for keeping a cache for the objects - it decides where that cache lives and how long it is kept. In our case, all we need to do is use a "singleton-like" cache (MainObjectCache) and make sure it is invalidated after the given period of time. Easy as that!

This is how it looks for the StructureMap 2.6 family:

public class TimeExpiringLifecycle : ILifecycle
{
    private readonly long _secondsToExpire;
    private readonly IObjectCache _cache = new MainObjectCache();

    private DateTime _lastExpired;

    public TimeExpiringLifecycle(long secondsToExpire)
    {
        _secondsToExpire = secondsToExpire;
        Expire();
    }

    private void Expire()
    {
        _lastExpired = DateTime.Now;
        _cache.DisposeAndClear();
    }

    public void EjectAll()
    {
        _cache.DisposeAndClear();
    }

    public IObjectCache FindCache()
    {
        if (DateTime.Now.AddSeconds(-_secondsToExpire) >= _lastExpired)
            Expire();

        return _cache;
    }

    public string Scope
    {
        get { return GetType().Name; }
    }
}

And here is the same for StructureMap 3.0 (there were some breaking name changes, etc.):

public class TimeExpiringLifecycle : ILifecycle
{
    private readonly long _secondsToExpire;
    private readonly IObjectCache _cache = new LifecycleObjectCache();

    private DateTime _lastExpired;

    public TimeExpiringLifecycle(long secondsToExpire)
    {
        _secondsToExpire = secondsToExpire;
        Expire();
    }

    private void Expire()
    {
        _lastExpired = DateTime.Now;
        _cache.DisposeAndClear();
    }

    public void EjectAll(ILifecycleContext context)
    {
        _cache.DisposeAndClear();
    }

    public IObjectCache FindCache(ILifecycleContext context)
    {
        if (DateTime.Now.AddSeconds(-_secondsToExpire) >= _lastExpired)
            Expire();

        return _cache;
    }

    public string Description
    {
        get 
        {
            return "Lifecycle for StructureMap that keeps the objects for the period of given seconds."; 
        }
    }
}

StructureMap is responsible for reading and writing the cache, constructing the objects, etc. - we don't need to care about that stuff at all. The only thing we should remember is that although all the requests within the 600-second window will be served with the cached object, after that time one of the requests will finally encounter a cache miss and will need to recreate that expensive object, bearing the cost within that request.