tag:blogger.com,1999:blog-60380342646926306282024-03-13T00:28:15.420+01:00NOtherDevAdam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.comBlogger87125tag:blogger.com,1999:blog-6038034264692630628.post-25624822026915421002016-02-13T21:06:00.000+01:002016-02-13T21:06:41.955+01:00Move along, nothing to see here<p>OK, that does it. It’s high time to do something about this blog. I’ve let it rot here for too long, building up a feeling of guilt for not writing regularly. That kind of online presence is not really a valuable asset at the moment, especially for someone aspiring to call himself a web developer, and that’s how I’d like to describe myself today. The blog looks and works like it’s 2010. But the bigger problem is that there’s nothing new to read here.</p>
<p>I was thinking about starting over, with a fresh look and feel and new motivation to write. But actually it’s not a lack of motivation that brought things to this point. I just have to admit there’s no way for me to keep the blog alive with two small children and a full-time job while also trying to sleep enough and get at least a bit of free time. So the only thing I can do with this blog is to call it finished and move forward.</p>
<p>Thank you all, dear readers, for being here. Over these four-plus years, more than 100k of you visited. Most of you came here to read my 2011/12 series of posts about NHibernate’s then-new mapping features - the <a href="/2012/02/nhibernates-mapping-by-code-summary.html">summary post</a> alone was read almost 50k times. These posts gave me a lot of satisfaction and tons of positive feedback. And it’s still the only comprehensive content on the web about those NHibernate features. But that’s probably because <a href="/2012/09/is-nhibernate-dead.html">NHibernate is dead</a>, as I wrote half a year later in a somewhat clickbait-titled article that was also quite popular. The article that set the record for visits in a single day was <a href="/2014/04/fundamentals-still-holds-true.html">Computer Science fundamentals still hold true</a> from 2014 - it generated 5093 sessions in one day.</p>
<p>The blog will not be updated anymore, but it will stay available. Closing it doesn’t mean I’m not going to be present on the web. I’ll still occasionally write posts at <a href="http://blog.brightinventions.pl">Bright’s blog</a>. And I’m going to start a smaller form of blogging now. Starting yesterday, I’m going to tweet about the things I learn or find out each workday. So <b>stay tuned and <a href="https://twitter.com/NOtherDev">follow me on Twitter</a></b>!</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com2tag:blogger.com,1999:blog-6038034264692630628.post-88039023976069201202015-11-17T21:45:00.000+01:002015-11-18T13:24:25.279+01:00What Web Can Do Today<p>My ever-growing enthusiasm for the Web motivated me to spend a few evenings working on a little website that I'm proudly presenting today - <b><a href="https://whatwebcando.today/">What Web Can Do Today</a></b>. </p>
<p>It's a quick overview of modern Web technologies centered on integration with devices. These technologies, based purely on the Web stack, are already available or soon to be available in the major browsers. The Web Platform now has a surprising amount to offer, and there's a wide spectrum of use cases solvable with HTML5 APIs without relying on closed, vendor-locked, non-interoperable native mobile technologies.</p>
<p>Enjoy!</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com0tag:blogger.com,1999:blog-6038034264692630628.post-77284316126635640362015-08-27T08:05:00.001+02:002015-08-27T08:05:51.052+02:00Simplistic JavaScript dependency injection with ES6 destructuring<p>Recently I got a bit tired of Angular's quirks and intricacies. To freshen up, I'm playing with framework-less JavaScript (<a href="http://vanilla-js.com/">Vanilla JS</a>). I'm also getting more and more used to ES6 features. One of the outcomes so far is an idea for a <a href="http://www.martinfowler.com/articles/injection.html">Dependency Injection</a> approach that stays simplistic, decoupled from any framework and still convenient to consume.</p>
<h3>Destructuring</h3>
<p>One of the features I like most in ES6 is <a href="http://www.2ality.com/2015/01/es6-destructuring.html">destructuring</a>. It introduces a convenient syntax for getting multiple values from arrays or objects in a single step, for example:</p>
<pre class="brush: javascript">let [lat, lng] = [54.4049, 18.5763];
console.log(lat); // 54.4049
console.log(lng); // 18.5763</pre>
<p>or like this:</p>
<pre class="brush: javascript">let source = { first: 1, second: 2 };
let { first, second } = source;
console.log(first, second); // 1, 2</pre>
<p>What is even nicer, it works fine in a function definition, too, making it a great replacement for the <a href="http://christianheilmann.com/2008/05/23/script-configuration/">config object pattern</a>, where instead of providing a large number of parameters, some of them potentially optional, we pass a single plain configuration object and read all the relevant options from inside that object. So, with ES6 destructuring (plus default parameter support), instead of this:</p>
<pre class="brush: javascript">function configurable(config) {
    var option1 = config.option1 || 123;
    var option2 = config.option2 || 'abc';

    // the actual code starts here...
}</pre>
<p>we can move all that read-config-and-apply-default-if-needed stuff directly into the parameter list:</p>
<pre class="brush: javascript">function configurable({ option1 = 123, option2 = 'abc' }) {
    // the actual code from the very beginning...
}</pre>
<p>The code is equivalent and requires no changes on the caller side.</p>
<h3>Injecting</h3>
<p>We can use destructuring to provide an Angular-like experience for receiving dependencies in a class or a function - one that is even more cruft-free, as it's minification-safe and thus doesn't require tricks like <a href="https://github.com/olov/ng-annotate">ngAnnotate</a>.</p>
<p>Here is how it can look from the dependencies consumer side:</p>
<pre class="brush: javascript">function iHaveDependencies({ dependency1, dependency2 }) {
    // use dependency1 & dependency2
}</pre>
<p>Whenever we invoke the <tt>iHaveDependencies</tt> function, we need to pass it a single parameter: an object with <tt>dependency1</tt> and <tt>dependency2</tt> keys, but possibly with other keys, too. Nothing prevents us from passing an object with all the possible dependencies there (a container).</p>
<p>So the last thing is to ensure we have one available whenever we create the objects (or invoke the functions):</p>
<pre class="brush: javascript">// possibly create it once and keep it for a long time
let container = {
    dependency1: createDependency1(),
    dependency2: createDependency2(),
    dependency3: createDependency3(),
    otherDependency: createOtherDependency()
};

// use our "container" to resolve dependencies
iHaveDependencies(container);</pre>
<p>That's all. The destructuring mechanism will take care of populating <tt>dependency1</tt> and <tt>dependency2</tt> variables within our function seamlessly.</p>
<p>We can easily build a dependency injection "framework" on top of that. The implementation would vary depending on how our application creates and accesses its objects, but the general idea holds true. Isn't that neat?</p>
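<p>For illustration, here is one minimal sketch of such a "framework" - a tiny registry that lazily builds the container object and hands it to a destructuring consumer. All the names here (<tt>register</tt>, <tt>resolve</tt>) are invented for this example and are not part of any library:</p>

```javascript
// A minimal, hypothetical DI "container" built on destructuring.
// register/resolve are illustrative names, not a real framework API.
let registrations = new Map();

function register(name, factory) {
  registrations.set(name, factory);
}

function resolve(fn) {
  // Build the container object from the registered factories...
  let container = {};
  for (let [name, factory] of registrations) {
    container[name] = factory();
  }
  // ...and let destructuring in fn's signature pick what it needs.
  return fn(container);
}

// Usage:
register('logger', () => ({ log: msg => console.log(msg) }));
register('config', () => ({ apiUrl: 'https://example.com' }));

function startApp({ logger, config }) {
  logger.log('Starting with ' + config.apiUrl);
  return config.apiUrl;
}

resolve(startApp); // logs "Starting with https://example.com"
```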
<p>PS. Because of the lack of direct ES6 support in today's browsers, running this in a browser requires transpiling it down to ES5 beforehand. <a href="https://babeljs.io/">Babel</a> works great for that.</p>
<p><i>This post was originally posted on <a href="http://blog.brightinventions.pl/simplistic-javascript-dependency-injection-es6-destructuring/">my company blog</a>.</i></p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com0tag:blogger.com,1999:blog-6038034264692630628.post-63078106754331246572015-05-27T08:58:00.000+02:002015-05-27T08:58:53.271+02:00iOS layouts for web developers<p>I'm quite ashamed I'm still failing to write something on a regular basis here. But it's not that I'm not writing anything. I've just concluded my <a href="http://blog.brightinventions.pl/ios-layouts-for-web-developers/"><b>iOS layouts for web developers</b> series</a> on <a href="http://blog.brightinventions.pl/">Bright Inventions blog</a>. </p>
<p>Almost all of my past development experience is centered around the web. Just recently I had an opportunity to dive into iOS development, and while I enjoy it, I miss a lot of things from the web development world. I quickly discovered that applying techniques and approaches directly from the web is often not possible. Sometimes I had to switch to a different mindset than the one I'm used to. To make things easier, I was looking for an iOS beginner guide targeted specifically at web developers like me, but I didn't find any. This is how the idea for this series of blog posts was born.</p>
<p><a href="http://blog.brightinventions.pl/ios-layouts-for-web-developers/">Have a look!</a></p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com0tag:blogger.com,1999:blog-6038034264692630628.post-37756430314335841092014-11-05T10:22:00.003+01:002015-08-27T08:06:58.359+02:00Attaching ShareJS to &lt;select&gt; element<p>One thing I found missing in the <a href="http://sharejs.org/">ShareJS</a> library was the ability to attach live concurrent editing to an HTML <tt>&lt;select&gt;</tt> element. Out of the box it works only with text fields - <tt>&lt;input&gt;</tt> and <tt>&lt;textarea&gt;</tt> - using the <tt>doc.attachTextarea(elem)</tt> function.</p>
<p>Working around that deficiency wasn't trivial. ShareJS works with <a href="http://en.wikipedia.org/wiki/Operational_transformation">Operational Transformations</a>, which extract each logical change to the text (an addition or a removal) and send only the change information over the wire. That is great for textual elements, but for <tt>&lt;select&gt;</tt>, whose value is always replaced in one shot, it makes little sense.</p>
<p>Unfortunately, there is no "replace" operation we could use on a <tt>&lt;select&gt;</tt> value change - the <i>modus operandi</i> we have to live with is constrained to insertions and removals. It means we have to mimic a "replace" operation with a removal and an insertion. The problem with this approach is that when the operations get reversed - so that the client receives the new value's insertion first and then the removal of the previous value - the intermediate value in between is not a valid <tt>&lt;option&gt;</tt>. It is a concatenation of the old and new values. The DOM API doesn't like that and rejects the change, setting the <tt>&lt;select&gt;</tt> value to an empty one. The removal operation that comes next is then unable to fix the value, as it tries to remove something from an already-empty string in the DOM.</p>
<p>I worked around that by wrapping my DOM element with a tiny wrapper that keeps the raw value and exposes it for ShareJS transformations while still updating the original element's DOM:</p>
<pre class="brush: javascript">var rawValue = innerElem.value;
var elem = {
    get value () {
        return rawValue;
    },
    set value (v) {
        rawValue = v;
        innerElem.value = v;
    }
};</pre>
<p>ShareJS also doesn't attach itself to the <tt>change</tt> event typical for a <tt>&lt;select&gt;</tt> element - it specializes in keyboard events. So I have to attach a listener on my own and relay the event to the underlying ShareJS implementation, faking an event of a type the library handles - I've chosen the mysterious <tt>textInput</tt> event.</p>
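<p>Conceptually, the relaying boils down to listening for the native <tt>change</tt> event and re-emitting the new value in a form the text-oriented machinery can consume. The sketch below illustrates just that idea - the <tt>emitTextInput</tt> callback is my own stand-in for the library's internal handling, not an actual ShareJS API:</p>

```javascript
// Sketch of the relay idea: whenever the <select>'s value changes,
// hand the new value to a text-input-style handler. emitTextInput is
// a placeholder for however the library consumes the faked event.
function relayChangeEvents(selectElem, emitTextInput) {
  selectElem.addEventListener('change', function () {
    emitTextInput(selectElem.value);
  });
}
```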
<p>Here is the full code as a Gist: <a href="https://gist.github.com/NOtherDev/9e713cfd68d6da9a174a">ShareJS attachSelect</a>. It adds a new function to the <tt>Doc</tt> prototype, allowing it to be called the same way as ShareJS's native <tt>attachTextarea</tt>:</p>
<pre class="brush: javascript">if (elem.tagName.toLowerCase() === 'select') {
    doc.attachSelect(elem);
} else {
    doc.attachTextarea(elem);
}</pre>
<p>Feel free to use the code; I hope someone finds it useful.</p>
<p><i>This post was originally posted on <a href="http://blog.brightinventions.pl/attaching-sharejs-to-select/">my company blog</a>.</i></p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com0tag:blogger.com,1999:blog-6038034264692630628.post-9677277816960558332014-10-30T09:22:00.001+01:002014-10-31T11:33:52.001+01:00ShareJS 0.7.3 working example<p>I’m experimenting with the <a href="http://sharejs.org/">ShareJS</a> library, which is intended to allow live concurrent editing like in Google Docs. The demo on their website makes it seem incredibly easy, even though later on the page they are so cruel: “<i>ShareJS is mostly working, but it’s still a bit shit.</i>”. I wouldn’t be so harsh, as I was able to have it up and running in less than a few hours. But the fact is it wasn’t as easy as it seemed.</p>
<p>It looks like the main problem with the current state of ShareJS is something pretty common in the wild and uncontrolled open source world - the lack of proper documentation. Here the problem is even worse. There are <a href="https://github.com/share/ShareJS/wiki">some docs</a> and <a href="http://sharejs.org/demos.html">examples</a>, but most of them are either incomplete or outdated. The ShareJS.org website runs on ShareJS 0.5, while the most recent release is 0.7.3, with no backward compatibility between those releases. I think it would be less harmful if there were no examples at all - right now they are more misleading than helpful. It was a bit frustrating when even the shortest and simplest snippet from their website didn’t work, failing on calls to non-existent functions.</p>
<p>Anyway, I was able to figure out what I need to change to have the simple demo running, both server- and client-side. Here it is, in case you have the same struggle, too.</p>
<p>On <b>server-side</b>, I’m running a CoffeeScript WebSocket server, almost like in <a href="https://github.com/share/ShareJS/blob/master/examples/ws.coffee">the original sample</a>. I just needed a few changes in order to have it running with <a href="https://github.com/senchalabs/connect#readme">Connect 3</a> - the logging and static-serving middlewares are no longer included in Connect out of the box, so I used <tt><a href="https://github.com/expressjs/morgan">morgan</a></tt> and <tt><a href="https://github.com/expressjs/serve-static">serve-static</a></tt>, respectively. Here is the only changed part, around the Connect middleware initialization:</p>
<pre class="brush: plain">app = connect()
app.use morgan()
app.use '/srv', serveStatic sharejs.scriptsDir
app.use serveStatic "#{__dirname}/app"</pre>
<p>Go here for full Gist: <a href="https://gist.github.com/NOtherDev/f288b939d19499060e1b">ShareJS 0.7.3 server-side code</a>.</p>
<p>I’m exposing client JavaScript libraries provided with ShareJS under <tt>/srv</tt> path and the client-facing web application files, physically located in <tt>/app</tt> on my filesystem, are exposed directly in the root path.</p>
<p><b>Client-side</b> was a bit harder. Running the original code from the main ShareJS.org website wasn’t successful.</p>
<pre class="brush: js">sharejs.open('blag', 'text', function(error, doc) {
    var elem = document.getElementById('pad');
    doc.attach_textarea(elem);
});</pre>
<p>It tries to call the <tt>sharejs.open</tt> function, which yields a “<tt>TypeError: undefined is not a function</tt>” error for a simple reason - there is no longer an “<tt>open</tt>” function on the <tt>sharejs</tt> global variable. Fiddling around, I found an example using a more verbose call like this:</p>
<pre class="brush: js">var ws = new WebSocket('ws://127.0.0.1:7007');
var share = new sharejs.Connection(ws);

var doc = share.get('blag', 'doc');
if (!doc.type) {
    doc.create('text');
}

doc.whenReady(function () {
    var elem = document.getElementById('pad');
    doc.attachTextarea(elem);
});</pre>
<p>This seemed legitimate and didn’t fail immediately, but I was getting the “<tt>Operation was rejected (Document already exists). Trying to rollback change locally.</tt>” error message every time except the first. The code was calling <tt>doc.create('text')</tt> every time, and that was clearly wrong - I should get <tt>doc.type</tt> pre-populated somehow. The solution is to subscribe to the document first and move the type check and creation into the function called after the document is ready - like this:</p>
<pre class="brush: js">var ws = new WebSocket('ws://127.0.0.1:7007');
var share = new sharejs.Connection(ws);

var doc = share.get('blag', 'doc');
doc.subscribe();

doc.whenReady(function () {
    if (!doc.type) {
        doc.create('text');
    }
    var elem = document.getElementById('pad');
    doc.attachTextarea(elem);
});</pre>
<p>See the full Gist: <a href="https://gist.github.com/NOtherDev/2ea2bb111c00282e7617">ShareJS 0.7.3 client-side code</a>.</p>
<p><i>This post is cross-posted with <a href="http://blog.brightinventions.pl/sharejs-073-working-example/">my company blog</a></i>.</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com0tag:blogger.com,1999:blog-6038034264692630628.post-1353807714723683782014-07-10T22:55:00.002+02:002014-07-10T22:57:10.352+02:00The Switch<div class="separator" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/thumb/4/4b/Open_knife_switch.jpg/200px-Open_knife_switch.jpg" /></div><p>A little more personal than usual today. At the end of June I left a job on a long-term, 40+ developer project in a large international company. I didn't feel underpaid, and my job was neither mundane nor exhausting. But I felt it was high time to move on. </p>
<p>I'm now working at a company that employs around a thousand times fewer people and focuses on projects that are much smaller in scope and much shorter in time than at my previous job. I can see the good and the bad sides of that change. I have more opportunities to try out and learn new things and I can stay away from the office politics that I hate, but it comes at the cost of probably reduced job stability and the lack of the long-term project maintenance challenges that I really enjoy.</p>
<p>The biggest motivation I had was to test myself against my own belief that software engineering is much more a mindset than pure knowledge. I've always wanted to be open to programming languages and technology stacks other than the one I was currently working with. I think the language or the platform is just a tool, and a good software developer should not be constrained to one particular toolset. I believe switching from one to another should be just a matter of getting familiar with the practices and conventions of the given platform, plus some practice. And actually, the experience gained on other platforms can be used to get the best of both worlds.</p>
<p>So here I am - I no longer consider myself a database-inclined .NET guy. Now I'll be working in a variety of technologies, centered around mobile and front-end web development - from iOS to Android, from Node.js to AngularJS etc. The key is I have virtually no experience in any of those stacks. Right now I feel a bit like a toddler - I'm learning hundreds of new basic things every day. And that's the fun!</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com2tag:blogger.com,1999:blog-6038034264692630628.post-92083852922726628032014-06-18T20:30:00.000+02:002014-06-18T20:30:02.264+02:00StructureMap: hot swap cache<p>In the previous post I've shown how to <a href="/2014/06/structuremap-time-expiring-objects-cache.html">cache the objects in StructureMap for a given period of time</a>. As I mentioned in that post, there is one possibly serious downside of the approach presented - the penalty of cache rebuilding that kicks one unlucky user every caching period. If it takes more than several seconds for the cached object to be built, we probably don't want this to happen in-process, unless we're showing our users something like <a href="http://xkcd.com/">XKCD strips</a> while waiting.</p>
<p>Ideally, we would rebuild our cache in some kind of off-process mechanism and, when it's ready, just replace the old cache object with the fresh one - like disk <a href="http://en.wikipedia.org/wiki/Hot_swapping">hot swapping</a>. Is that also possible with StructureMap? Probably not with lifecycles - lifecycles do not control object creation, they just provide the proper cache.</p>
<p>What we can do instead is to <b>pre-build the cache object and inject it into the working container</b>. But we can't use the container to prepare that cache object for us this time - the container would happily fulfill our request with the previously cached object. Although <a href="http://simpleprogrammer.com/2010/11/30/basic-to-basics-understanding-ioc-part-2-creation/">delegating the object creation process</a> is actually one of the purposes we use IoC containers for, I can't see any neat way to keep the container responsible for creating these objects throughout the application lifetime other than this cache pre-building.</p>
<p>So I've chosen the less neat way. I've created a cache factory that just <tt>new</tt>s the cache up manually, while itself being created by StructureMap. That way, whenever the application asks for <tt>IDependency</tt>, it quickly gets the cached instance. But when the cache rebuilding task runs, it grabs <tt>DependencyFactory</tt> and creates a new object - the future cache.</p>
<p>Let's see the code. First, here is a base class for all the cache factories - <tt>CacheFactory</tt>. It smells a bit like a <a href="http://blog.ploeh.dk/2014/05/19/conforming-container/">conforming container</a>, but I find it not really harmful. It is not intended to be used in any context other than cache pre-building, and it is specialized to create a single type of object. Cache consumers should not know about it and should just take an <tt>ICache</tt> dependency through constructor injection or any other legitimate way.</p>
<pre class="brush: csharp">public abstract class CacheFactory
{
    public abstract object InternalBuild();
    public abstract Type PluginType { get; }
}

public abstract class CacheFactory<T> : CacheFactory
{
    public T Build()
    {
        return (T)InternalBuild();
    }

    public override Type PluginType
    {
        get { return typeof(T); }
    }
}</pre>
<p>The non-generic class is the core here. It defines a method responsible for returning the actual cache instance. The generic class is there just to keep the API nice and to allow defining strongly-typed constraints.</p>
<p>The second brick in the puzzle is the code that handles the actual cache hot swap. It spawns a new thread that wakes up every 600 seconds and traverses all the <tt>CacheFactories</tt> registered in the container, creating new cache instances and injecting them into the working container. This way, up until the <tt>Inject</tt> call, StructureMap serves all requests with the previously cached instance, and after the <tt>Inject</tt> call it serves the new object, ready to be used without any further delays.</p>
<pre class="brush: csharp">public class BackgroundCacheRefresher
{
    private readonly IContainer _container;
    private readonly ILog _log;

    public BackgroundCacheRefresher(IContainer container, ILog log)
    {
        _container = container;
        _log = log;
    }

    private class Worker
    {
        private readonly IContainer _container;
        private readonly IEnumerable<CacheFactory> _cacheFactories;
        private readonly ILog _log;

        public Worker(IContainer container, IEnumerable<CacheFactory> cacheFactories, ILog log)
        {
            _container = container;
            _cacheFactories = cacheFactories;
            _log = log;
        }

        public void RefreshAll()
        {
            foreach (var cacheFactory in _cacheFactories)
            {
                try
                {
                    _container.Inject(cacheFactory.PluginType, cacheFactory.InternalBuild());
                    _log.InfoFormat("Replaced instance of '{0}'.", cacheFactory.PluginType.Name);
                }
                catch (Exception e)
                {
                    _log.Error(String.Format("Failed to replace instance of '{0}' due to exception,"
                        + " will continue to use previously cached instance.",
                        cacheFactory.PluginType.Name), e);
                }
            }
        }
    }

    private void RunLoop()
    {
        while (true)
        {
            var lifetime = 600; // seconds
            _log.InfoFormat("Will now go to sleep for {0} s.", lifetime);
            Thread.Sleep(TimeSpan.FromSeconds(lifetime));
            _log.Info("Woke up, starting refresh cycle.");
            _container.GetInstance<Worker>().RefreshAll();
        }
    }

    public void Execute()
    {
        new Thread(RunLoop).Start();
    }
}</pre>
<p>I'm creating <tt>BackgroundCacheRefresher</tt> and calling its <tt>Execute</tt> method at application startup. It starts with sleeping - the first cache is built "traditionally", as registered below.</p>
<p>Now we just need to wire things up in the <tt>Registry</tt>. I've created an extension method for the cache registration to keep it clean and encapsulated. It registers both the cache object (as a singleton, to keep it in memory - though we'll replace it periodically with the code above) and its corresponding <tt>CacheFactory</tt> implementation.</p>
<pre class="brush: csharp">public static class RegistryExtensions
{
    public static CacheBuilderDSL<T> UseHotSwapCache<T>(this CreatePluginFamilyExpression<T> expression)
    {
        return new CacheBuilderDSL<T>(expression);
    }

    public class CacheBuilderDSL<T>
    {
        private readonly CreatePluginFamilyExpression<T> _expression;

        public CacheBuilderDSL(CreatePluginFamilyExpression<T> expression)
        {
            _expression = expression;
        }

        public SmartInstance<TConcrete, T> With<TConcrete, TFactory>(Registry registry)
            where TConcrete : T
            where TFactory : CacheFactory<T>
        {
            registry.For<CacheFactory>().Use<TFactory>();
            return _expression.Singleton().Use<TConcrete>();
        }
    }
}</pre>
<p>And here is how to use it:</p>
<pre class="brush: csharp">For<IDependency>().UseHotSwapCache().With<ExpensiveDependency, ExpensiveDependencyFactory>(this);</pre>
<p>The last thing is the factory - just <tt>new</tt>ing up the cache object. Note that its dependencies can be provided in the typical, constructor-injected way.</p>
<pre class="brush: csharp">public class ExpensiveDependencyFactory : CacheFactory<IDependency>
{
    private readonly IDependencyDependency _otherDependency;

    public ExpensiveDependencyFactory(IDependencyDependency otherDependency)
    {
        _otherDependency = otherDependency;
    }

    public override object InternalBuild()
    {
        return new ExpensiveDependency(_otherDependency);
    }
}</pre>
<p>Whoa, quite a bit of code here. Maybe there is something simpler available - if so, please drop me a line! Otherwise, feel free to use it.</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com0tag:blogger.com,1999:blog-6038034264692630628.post-80961248555998063972014-06-17T13:43:00.001+02:002014-06-17T13:46:05.045+02:00StructureMap: time expiring objects cache<p><a href="https://github.com/structuremap/structuremap">StructureMap</a> is my favorite .NET <a href="http://simpleprogrammer.com/2010/11/23/back-to-basics-understanding-ioc/">IoC container</a>. It has a very nice API and is quite extensible. One of the things I use its extensibility points for is to have my expensive objects cached for some time. Not a singleton, as the cached values change from time to time and I want to see those changes eventually. But also not a transient or per-request instance, as filling the cache is expensive - let's say it's a web service call that takes several seconds to complete. There is no such object lifecycle <a href="http://ignipro.blogspot.com/2012/09/structuremap-scopes-and-life-cycles.html">provided by StructureMap</a>. Let's fix that!</p>
<p>What I need is a custom lifecycle object, so that I can configure my dependencies almost as usual - so instead of, for example:</p>
<pre class="brush: csharp">For<IDependency>().HybridHttpOrThreadLocalScoped()
.Use<NotSoExpensiveDependency>();</pre>
<p>I'll use my own lifecycle using more generic <tt>LifecycleIs</tt> DSL method:</p>
<pre class="brush: csharp">For<IDependency>().LifecycleIs(new TimeExpiringLifecycle(secondsToExpire: 600))
.Use<DependencyFromWebService>();</pre>
<p><tt>LifecycleIs</tt> expects me to pass an <tt>ILifecycle</tt> implementation in. That interface is responsible for keeping a cache for the objects - it decides where that cache lives and how long it lives. In our case, all we need to do is use a "singleton-like" cache (<tt>MainObjectCache</tt>) and make sure it is invalidated after a given period of time. Easy as that!</p>
<p>This is how it looks like for StructureMap 2.6 family:</p>
<pre class="brush: csharp">public class TimeExpiringLifecycle : ILifecycle
{
    private readonly long _secondsToExpire;
    private readonly IObjectCache _cache = new MainObjectCache();
    private DateTime _lastExpired;

    public TimeExpiringLifecycle(long secondsToExpire)
    {
        _secondsToExpire = secondsToExpire;
        Expire();
    }

    private void Expire()
    {
        _lastExpired = DateTime.Now;
        _cache.DisposeAndClear();
    }

    public void EjectAll()
    {
        _cache.DisposeAndClear();
    }

    public IObjectCache FindCache()
    {
        if (DateTime.Now.AddSeconds(-_secondsToExpire) >= _lastExpired)
            Expire();

        return _cache;
    }

    public string Scope
    {
        get { return GetType().Name; }
    }
}</pre>
<p>And here is the same for StructureMap 3.0 (there were some breaking name changes etc.):</p>
<pre class="brush: csharp">public class TimeExpiringLifecycle : ILifecycle
{
    private readonly long _secondsToExpire;
    private readonly IObjectCache _cache = new LifecycleObjectCache();
    private DateTime _lastExpired;

    public TimeExpiringLifecycle(long secondsToExpire)
    {
        _secondsToExpire = secondsToExpire;
        Expire();
    }

    private void Expire()
    {
        _lastExpired = DateTime.Now;
        _cache.DisposeAndClear();
    }

    public void EjectAll(ILifecycleContext context)
    {
        _cache.DisposeAndClear();
    }

    public IObjectCache FindCache(ILifecycleContext context)
    {
        if (DateTime.Now.AddSeconds(-_secondsToExpire) >= _lastExpired)
            Expire();

        return _cache;
    }

    public string Description
    {
        get
        {
            return "Lifecycle for StructureMap that keeps the objects for the period of given seconds.";
        }
    }
}</pre>
<p>StructureMap is responsible for reading and writing the cache, constructing the objects etc. - we don't need to care about that stuff at all. The only thing we should remember is that although all the requests within the 600 seconds will be served with the cached object, after that time one of the requests will finally encounter a cache miss and will need to rebuild that expensive cache, bearing the cost within that request.</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com2tag:blogger.com,1999:blog-6038034264692630628.post-6455768457096398382014-04-18T10:35:00.000+02:002014-04-18T18:24:49.084+02:00Computer Science fundamentals still hold true<p>There are discussions out there about what a software developer should really know nowadays. Arguments are raised that most contemporary software development work is no longer computer science - which meant creating new stuff out of nothing - but is "just" engineering work - using already built and verified components and approaches, gluing and mixing them together to create stuff out of other stuff. Questions are raised whether a deep understanding of algorithms and data structures or other fundamentals of computer science is crucial for putting together existing libraries and frameworks, which is what most of us basically do every day. </p>
<p>The need for a good software developer to hold a Computer Science degree <a href="http://news.dice.com/2012/12/03/computer-science-degree/">is often questioned</a>, and there are multiple pieces of advice out there on <a href="http://simpledeveloper.com/how-to-be-a-software-developer-without-a-college-degree/">how to be successful in the field without a degree</a>. Indeed, I've never actually implemented sorting or tree search on my own for professional needs, only as an academic exercise. But I think the knowledge I gained during my CS studies gave me a lot of insight into how things work, and it gives me a sort of confidence in what I do. Without knowledge of time and memory complexity, I'd be wandering in the darkness.</p>
<p>Right now I'm working in a self-sufficient Agile team with a strong desire to avoid knowledge and responsibility silos, where everyone is encouraged to take on whatever task is needed to reach the goal. Of course there are still (and always will be) people more skilled in database stuff and others more skilled in HTML, and that's fine. But with no code ownership it's also fine when the more front-end-inclined developers do some back-end tasks, and vice versa.</p>
<p>But unless someone is doing something purely declarative in nature, like plain HTML or CSS, <b>code is still code</b>, regardless of whether it's low-level C or JavaScript at the client. That means understanding the mechanics of how things work and <b>knowing the fundamentals of data structures and algorithms is still crucial for all software developers</b>, no matter where in the stack they fit best. Of course, not having a degree cannot disqualify anyone from being a good software engineer, but the theoretical gaps <a href="http://www.hackreactor.com/blog/no-cs-degree-programmer-engineer">need to be filled properly</a>.</p>
<p>Recently I stumbled upon a simple piece of code that was already running in production and had worked well enough not to attract any attention while the amount of data stayed small. The code goes like this:</p>
<pre class="brush: csharp">foreach (var foo in foos)
{
    var matchingBars = bars.Where(x => x.Foo == foo);
    foreach (var bar in matchingBars)
    {
        DoSomethingWith(foo, bar);
    }
}</pre>
<p>Simple enough, isn't it? But the data kept growing. When we reached more than 20k foos and more than 20k bars, this is what happened:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-b3eTe96QppE/U1DjV2oWEQI/AAAAAAAAD2k/ARsyEgN613w/s1600/image002.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-b3eTe96QppE/U1DjV2oWEQI/AAAAAAAAD2k/ARsyEgN613w/s1600/image002.png" /></a></div>
<p>That simple piece of code hit us badly with its quite obvious O(n^2) number of comparisons. Foos and bars are both plain lists, so finding the matching elements requires traversing the whole bars collection for each and every foo. Each comparison is insignificantly cheap, but doing it 425 million times takes more than a minute!</p>
<p>I changed the code to use a <a href="http://notherdev.blogspot.com/2014/01/lookup-hidden-gem.html">Lookup</a> - a basic hashed structure that allows quick access to the elements by key. The code now looks like this:</p>
<pre class="brush: csharp">var barsLookup = bars.ToLookup(x => x.Foo);
foreach (var foo in foos)
{
    foreach (var bar in barsLookup[foo])
    {
        DoSomethingWith(foo, bar);
    }
}</pre>
<p>That simple change replaced more than 400 million comparisons with just the ~20k operations needed to build up the lookup plus ~20k cheap lookup reads. The result? Total execution time fell to just 115 ms.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-eTXdX-ouPFM/U1DjV5_ls3I/AAAAAAAAD2g/o5MIhsusvFI/s1600/image003.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-eTXdX-ouPFM/U1DjV5_ls3I/AAAAAAAAD2g/o5MIhsusvFI/s1600/image003.png" /></a></div>
<p>That's 538 times faster, just by one simple data structure change.</p>
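<p>The effect is easy to reproduce at home. Below is a minimal, self-contained sketch of the same comparison - the foo/bar entities are reduced to plain ints and the collection sizes are made up for the example, so the absolute timings will differ from the ones above, but the gap between the two loops should be just as dramatic:</p>

```csharp
using System;
using System.Diagnostics;
using System.Linq;

var foos = Enumerable.Range(0, 5000).ToList();
var bars = Enumerable.Range(0, 5000).ToList();

// O(n*m): scans the whole bars list for every single foo
var sw = Stopwatch.StartNew();
long slowPairs = 0;
foreach (var foo in foos)
    foreach (var bar in bars.Where(b => b == foo))
        slowPairs++;
sw.Stop();
Console.WriteLine($"nested scans: {sw.ElapsedMilliseconds} ms");

// O(n+m): one pass to build the hashed lookup, then cheap keyed reads
sw.Restart();
var barsLookup = bars.ToLookup(b => b);
long fastPairs = 0;
foreach (var foo in foos)
    foreach (var bar in barsLookup[foo])
        fastPairs++;
sw.Stop();
Console.WriteLine($"lookup: {sw.ElapsedMilliseconds} ms");
```

<p>Note that both versions visit exactly the same (foo, bar) pairs - only the cost of finding them changes.</p>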
<p>I've found a great <a href="http://bigocheatsheet.com/">algorithms and data structures complexity cheat sheet</a>. I don't think one can call himself a software developer without understanding what at least the basic entries in those tables mean.</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com6tag:blogger.com,1999:blog-6038034264692630628.post-12785530795447504402014-04-10T22:35:00.000+02:002014-04-10T22:35:42.006+02:00Waiting screen - doing bad things right - is it even possible?<p>Sometimes in large web applications there is a <b>necessity to make the client wait before the server is able to provide any content</b>. There may be some heavy calculations to perform, caches to refresh etc. In most cases <b>it probably can - and should - be avoided</b> using <a href="http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/">background workers that are not part of the actual web request</a> or some kind of <a href="http://blog.jacobsonhome.com/2013/04/a-javascript-long-running-background.html">asynchronous AJAX calls</a>. Those approaches make it possible to have either a completely undisturbed user experience or at least to reduce the fuss and eliminate the blocking wait.</p>
<p>But there's a chance that the task we have to accomplish can be neither offloaded to a background thread on the server nor performed asynchronously with AJAX during normal site operation. I had that kind of situation in a project where authorization was handled by an external, awfully enterprisey component, available through a web service that was slow as a snail and completely out of our control. It was not possible to show anything more than a waiting screen on the first log-on, as everything of any value had to be authorized first.</p>
<p>In that hopeless situation, we decided we needed a waiting screen, so that users would at least see something letting them know the service was being prepared for them. You know, if it's slow, that's a clear indication there's a lot of value inside and it's worth paying big money for. If it loaded quickly, it would seem to have no real value (in case my English is not fluent enough to express my thoughts accurately - yes, this was meant to be sarcastic).</p>
<p>Anyway, I was <b>looking for a solution for the waiting screen</b>, that ideally holds all of the features listed here in my subjective order of importance.</p>
<ol>
<li>it needs to be available not just as a JS throbber to be put somewhere on the site - <b>it needs to be a response for the initial request to the website</b>;</li>
<li>it needs to <b>draw something on screen while waiting</b> (obvious to say, but not so obvious to implement);</li>
<li>it should be <a href="http://en.wikipedia.org/wiki/URL_redirection#HTTP_status_codes_3xx"><b>HTTP-compliant</b></a> in terms of status codes etc. so that it doesn't confuse any browsers or web crawlers;</li>
<li>it should <a href="http://www.w3.org/QA/Tips/reback"><b>not break "back" button</b></a>;</li>
<li>while waiting, it <b>should indicate "waiting" status</b> in a browser's status bar, to convince the user something is really going on.</li>
</ol>
<p>Serving a simple wait screen with a 302 Redirect to the target request that does the actual work is not an option, as it fails requirement no. 2 - the browser will issue the redirected request without rendering our wait screen. But in order to fulfill points 3 and 4, we need the 302 Redirect - serving 200 OK without the actual content harms the protocol and the browsers' history badly - they're tightly coupled. So <b>there's a big contradiction here.</b> We can try going the unwanted <a href="http://en.wikipedia.org/wiki/URL_redirection#Refresh_Meta_tag_and_HTTP_refresh_header">HTML Redirect</a> route or use <a href="http://en.wikipedia.org/wiki/URL_redirection#JavaScript_redirects">JavaScript redirection</a> - that satisfies point 2, but it's no better at protocol compliance or browser history obedience.</p>
<p>Well, I got stuck here. I asked for help on <a href="http://stackoverflow.com/questions/22922835/displaying-content-of-302-redirect-or-http-compliant-waiting-screen">StackOverflow</a>, but it drew no interest at all. Let me know if I'm missing something.</p>
<p>For now, I've sacrificed protocol compliance and proper "back" button behavior in order to have anything shown on screen - points 1 and 2 are the crucial ones for the general user experience. It's still quite a weak experience, but without those there is no experience at all. I've chosen the JavaScript redirect route, triggered on the <tt>window.load</tt> jQuery event (note that the <tt>document.ready</tt> event is raised <a href="http://www.codeproject.com/Tips/632672/JavaScripts-document-ready-vs-window-load">when the DOM is ready, but possibly not yet rendered</a> - a bit too early for us to be sure something has already been drawn).</p>
<p>I can't see a way to have the waiting screen done right. Well, maybe <b>there is no right way to do bad things?</b></p>
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com0tag:blogger.com,1999:blog-6038034264692630628.post-8437579008756150902014-01-27T07:43:00.000+01:002014-01-27T20:23:02.778+01:00Introducing NOtherLookup - even better lookups<p>Recently I've been discussing <a href="/2014/01/lookup-hidden-gem.html">how awesome the <tt>ILookup</tt> data structure</a> provided by .NET Framework as a part of the LINQ library is. I've looked through some of its key features, but I've also mentioned <a href="/2014/01/downsides-of-net-lookups.html">a few of its imperfections</a>, like no way to create it directly and no easy way to do any operations on a Lookup as a whole, promising to take a look at how to fix that.</p>
<p>So, how to fix it? By using <b>NOtherLookup library I've just <a href="https://www.nuget.org/packages/NOtherLookup/1.0.0">pushed to NuGet</a></b>. It is a <b>set of extension methods and utility classes designed to make Lookup-based operations as easy and as natural as in plain LINQ-to-objects</b>.</p>
<p>Consider, for example, merging two Lookup instances into a single one that has the keys from both instances and, for each key present in both Lookups, the union of the value sets. When doing it in plain LINQ, we'd probably need to convert both Lookups to <tt>IDictionary<TKey, List<TValue>></tt>, then merge the corresponding keys and finally produce a new Lookup manually. With NOtherLookup, we can just use LINQ the same way we do on <tt>IEnumerable</tt>:</p>
<pre class="brush: csharp">var result = firstLookup.Union(secondLookup);</pre>
<p>In this example, <tt>result</tt> is still typed as <tt>ILookup<TKey, TValue></tt>, with no more conversions needed. And here is what it does:</p>
<pre class="brush: plain">firstLookup:
"a" => 1,2
"b" => 3,4
secondLookup:
"b" => 4,5
"c" => 6,7
result:
"a" => 1,2
"b" => 3,4,5
"c" => 6,7</pre>
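<p>For comparison, here is roughly what the same union costs in plain LINQ - my own sketch of the boilerplate the library spares you, not code taken from NOtherLookup itself:</p>

```csharp
using System;
using System.Linq;

var firstLookup = new[] { ("a", 1), ("a", 2), ("b", 3), ("b", 4) }
    .ToLookup(p => p.Item1, p => p.Item2);
var secondLookup = new[] { ("b", 4), ("b", 5), ("c", 6), ("c", 7) }
    .ToLookup(p => p.Item1, p => p.Item2);

// flatten both lookups into key-value pairs, concatenate them,
// drop the duplicates and rebuild a brand new lookup from scratch
var result = firstLookup
    .SelectMany(g => g, (g, v) => new { g.Key, Value = v })
    .Concat(secondLookup.SelectMany(g => g, (g, v) => new { g.Key, Value = v }))
    .Distinct()
    .ToLookup(x => x.Key, x => x.Value);

Console.WriteLine(string.Join(",", result["b"])); // 3,4,5
```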
<p>NOtherLookup offers several operators known from LINQ. Some combine two Lookups - besides <a href="https://github.com/NOtherDev/NOtherLookup#union---gets-the-unique-values-for-each-key"><tt>Union</tt></a> shown above, there are <a href="https://github.com/NOtherDev/NOtherLookup#concat---concatenates-values-for-each-key"><tt>Concat</tt></a>, <a href="https://github.com/NOtherDev/NOtherLookup#except---gets-the-difference-of-values-set-for-each-key"><tt>Except</tt></a>, <a href="https://github.com/NOtherDev/NOtherLookup#intersect---gets-the-intersection-of-values-set-for-each-key"><tt>Intersect</tt></a>, <a href="https://github.com/NOtherDev/NOtherLookup#join---combines-two-lookups-by-values-in-each-key-using-provided-selector"><tt>Join</tt></a> and <a href="https://github.com/NOtherDev/NOtherLookup#zip---combines-two-lookups-by-pairs-of-values-using-provided-selector-for-each-key"><tt>Zip</tt></a>. Some manipulate a single Lookup - <a href="https://github.com/NOtherDev/NOtherLookup#select---runs-a-projection-on-values-for-each-key"><tt>Select</tt></a> and <a href="https://github.com/NOtherDev/NOtherLookup#where---filters-values-for-each-key"><tt>Where</tt></a>. There is also the quite universal <a href="https://github.com/NOtherDev/NOtherLookup#oneachkey---runs-arbitrary-linq-query-on-lookup-elements-igroupings"><tt>OnEachKey</tt></a> method that allows running an arbitrary LINQ query on each lookup element while maintaining the <tt>ILookup</tt> type.</p>
<pre class="brush: csharp">ILookup<int, string> transformed = lookup.OnEachKey(g => g.Select(x => x + g.Key).Reverse());</pre>
<p>Example:</p>
<pre class="brush: plain">lookup:
"a" => 1,2
"b" => 3,4
transformed:
"a" => "2a", "1a"
"b" => "4b", "3b"</pre>
<p>NOtherLookup also contains a few features that make obtaining <tt>ILookup</tt>s easier. The most important one is <a href="https://github.com/NOtherDev/NOtherLookup#creating-ilookup-manually---lookupbuilder"><tt>Lookup.Builder</tt></a> - a class that allows creating a lookup from scratch (a remedy for the lack of a public <tt>Lookup</tt> constructor), with support for custom comparers if needed.</p>
<pre class="brush: csharp">ILookup<int, string> lookup = Lookup.Builder
    .WithKey(1, new[] { "a", "b" })
    .WithKey(2, new[] { "c", "d" })
    .WithComparer(new CustomIntComparer()) // can be omitted
    .Build();</pre>
<p>There is also <a href="https://github.com/NOtherDev/NOtherLookup#empty-ilookup">empty lookup</a> available for your convenience.</p>
<p>Last but not least, there are easy <a href="https://github.com/NOtherDev/NOtherLookup#converting-ilookup-fromto-idictionary">conversions between <tt>ILookup</tt> and <tt>IDictionary</tt></a> available - it's a simple method call either way:</p>
<pre class="brush: csharp">ILookup<int, string> lookup = dict.ToLookup();
Dictionary<int, List<string>> backToDict = lookup.ToDictionary();</pre>
<p><b>For detailed descriptions with examples, see the <a href="https://github.com/NOtherDev/NOtherLookup/blob/master/README.md">README</a> on the <a href="https://github.com/NOtherDev/NOtherLookup">GitHub project page</a></b>, where the code lives. NOtherLookup is licenced under the very liberal MIT licence, so feel free to grab the code and use it any way you want.</p>
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com2tag:blogger.com,1999:blog-6038034264692630628.post-72131599764130580342014-01-12T23:23:00.002+01:002014-01-27T20:22:35.844+01:00The downsides of .NET's Lookups<p><a href="/2014/01/lookup-hidden-gem.html">In the previous post</a> I was admiring the wonderful features of .NET's <a href="http://msdn.microsoft.com/en-us/library/bb534291(v=vs.110).aspx">Lookup</a> - its immutability, null-safety and its clean, readable presence. I also promised to look for the downsides and pitfalls one may encounter when using it.</p>
<p>Are there any? Well, to be honest, I can't see any serious problems with using Lookups. If you need a collection to be used as a lookup table keyed by some identifier, with no modification support and with multiple values per key - <b>you won't find anything better suited than <tt>ILookup</tt></b>.</p>
<p>There are just two things I find to be slight impediments in Lookup's otherwise ideal world.</p>
<p>The first is that the <b>only way to create an <tt>ILookup</tt> instance is via LINQ's <tt><a href="http://msdn.microsoft.com/en-us/library/system.linq.enumerable.tolookup(v=vs.110).aspx">ToLookup</a></tt> method</b>. Although the framework provides a default public <tt>ILookup</tt> implementation - the <tt><a href="http://msdn.microsoft.com/en-us/library/bb460184(v=vs.110).aspx">Lookup</a></tt> class - it has no public constructor. This means that in order to create our lookup, we need to prepopulate some other collection just to convert it later using <tt>ToLookup()</tt>.</p>
<p>The second thing is that the <b>beautifully clean API provided by <tt>ILookup<TKey, TValue></tt> goes away immediately once we need to do anything other than just read our lookup</b>. If we need a lookup created from an existing lookup but with one more value, or our lookup filtered, or joined with another, or whatever, it decays into <tt>IEnumerable<IGrouping<TKey, TValue>></tt> - and the readability hurts a lot. <tt>IGrouping</tt> represents a single lookup item - a collection of values with its associated key. Still nice for accessing the data, but not nice to operate on. There's no public implementation of <tt>IGrouping</tt> available in the framework - the implementation is internal to LINQ. There's no way to modify a grouping, which is understandable as it is part of an immutable <tt>ILookup</tt>, but there's also no way to create a new grouping based on an existing one. The workaround is to convert the whole lookup back to some mutable collection like <tt>Dictionary<TKey, TValue[]></tt>, mutate it and then convert it to a new lookup using convoluted LINQ projections, something like this:</p>
<pre class="brush: csharp">var temporaryDictionary = lookup.ToDictionary(x => x.Key, x => x.ToArray());
temporaryDictionary.Add("new key", new[] { "new value" });
var newLookup = temporaryDictionary
    .SelectMany(kv => kv.Value, (kv, v) => new { kv.Key, Value = v })
    .ToLookup(x => x.Key, x => x.Value);</pre>
<p>Alternatively, if we don't mind losing the existing key-value mappings and are able to recreate them again, we can flatten our lookup to a <tt>List<TValue></tt>, modify the list and then use the simplest <tt>ToLookup()</tt> overload.</p>
<pre class="brush: csharp">var temporaryList = lookup.SelectMany(x => x).ToList();
temporaryList.Add("new value");
var newLookup = temporaryList.ToLookup(x => SomethingThatRecreatesKey(x));</pre>
<p>No matter which way we choose, it's quite a lot of work for such a simple thing.</p>
<p>Next time I'll try to <a href="http://notherdev.blogspot.com/2014/01/NOtherLookup-even-better-lookups.html">propose a solution</a> for both issues. Stay tuned!</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com2tag:blogger.com,1999:blog-6038034264692630628.post-84167074416535126972014-01-06T15:03:00.000+01:002014-01-12T23:25:29.924+01:00Lookup - .NET's hidden gem<p>The .NET Framework has a lot to offer when it comes to generic data structures, especially collections. We have several kinds of lists, arrays, sets and dictionaries - each with its own performance and functional characteristics and scenarios where it fits much better than the others. As I observe in the project I work on, <b>one of the most often forgotten yet powerful collection types is <a href="http://msdn.microsoft.com/en-us/library/bb534291(v=vs.110).aspx">Lookup</a></b>. I think it deserves more attention.</p>
<p><tt>ILookup</tt> was introduced to the framework as a part of LINQ and is exclusively used in the LINQ context, as a result of <tt><a href="http://msdn.microsoft.com/en-us/library/system.linq.enumerable.tolookup(v=vs.110).aspx">ToLookup</a></tt> calls. Logically, it is a key-value collection like <tt>IDictionary</tt>, but it is <b>designed to hold multiple values under a single key</b>. Also, unlike <tt>Dictionary<K, V></tt>, it is <b>immutable</b>, which means that after creating the lookup we can only read it - no modifications are possible.</p>
<p>Generally one may say that <tt>ILookup<K, V></tt> is no better than <tt>IDictionary<K, IEnumerable<V>></tt> and as such is redundant. I'd strongly disagree. First of all, let's think about how dictionaries of collections are actually used. In my experience, they most often keep predefined, static "dictionaries" that are loaded once and live long, providing easy access to some rarely-changing values, like which counties there are in each country, etc. These are even often called "lookup tables". Note that in most cases we never change the stored values - the data is immutable. We replace it as a whole, periodically or when something changes. Doesn't that sound exactly like the key features of <tt>ILookup<K, V></tt> discussed above? We should always be <a href="/2011/11/use-tools-only-for-what-they-are-made.html">using the tools best suited for the job</a>, to avoid reinventing the wheel and re-solving problems others have already solved.</p>
<p>Moreover, <tt>ILookup</tt>'s definition is clearly <b>more readable than the nested generics</b> of its dictionary-based equivalent. When I look at code I've never seen before and spot <tt>ILookup<K, V></tt>, I instantly know that it is a lookup table and I feel its read-only, one-way nature. When I see <tt>IDictionary<K, IEnumerable<V>></tt>, I need to consider the possibility that it is used as a read-write store for some values and that I, as a user of this code, can (or should?) modify that collection. Using <tt>Lookup</tt> for lookups just makes sense.</p>
<p><tt>IDictionary</tt>-based lookups, as mutable data structures, may bite us badly. I once experienced quite an ugly error caused by passing around a plain <tt>List<T></tt> read from what was intended to be a static lookup shared between users, based on <tt>Dictionary<K, List<V>></tt>. The intention was to pass around immutable lookup data, but the receiving code actually got a mutable list - exactly the same reference that was sitting in the global lookup. The value passed was eventually customized for each user, and miracles started happening to the global lookup. All of that would have been impossible with lookups.</p>
<p>With dictionaries that are open for modification and not thread-safe by default, we are also subject to multiple kinds of failures in multi-threaded scenarios. All of it is gone with immutable data structure like <tt>ILookup</tt>.</p>
<p>And last but not least - null checks. Who likes to litter their code with all those boring null checks? When using <tt>IDictionary</tt> as a lookup table, every piece of code that consumes its data needs to check for key existence first or use a conditional statement with the <tt>TryGetValue</tt> method, because calling <tt>dictionary[nonExistingKey]</tt> would throw the dreadful <tt>KeyNotFoundException</tt>. Moreover, even if the key exists, the value may still legally be null, so to read the collection contained under the key we need two checks:</p>
<pre class="brush: csharp">if (dictionary.ContainsKey(theKey))
{
    var collectionOrPossiblyNull = dictionary[theKey];
    if (collectionOrPossiblyNull != null)
    {
        DoSomethingWithValues(collectionOrPossiblyNull);
    }
}</pre>
<p>or, more compactly but with the same complexity:</p>
<pre class="brush: csharp">T collection;
if (dictionary.TryGetValue(theKey, out collection) && collection != null)
    DoSomethingWithValues(collection);</pre>
<p><tt>ILookup<K, V></tt> was designed totally differently around key existence and null values. It just <b>returns an empty collection for non-existent keys</b>. And thanks to its strictly controlled way of being created (by LINQ's <tt>ToLookup</tt> method only), we can also be sure that no null collection is possible. The code above becomes just what it should be:</p>
<pre class="brush: csharp">DoSomethingWithValues(lookup[theKey]);</pre>
<p>Isn't it beautiful?</p>
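<p>A quick, self-contained illustration of that null-free behavior (the data is made up for the example):</p>

```csharp
using System;
using System.Linq;

var lookup = new[] { ("fruit", "apple"), ("fruit", "pear"), ("veg", "leek") }
    .ToLookup(p => p.Item1, p => p.Item2);

// existing key: all the values stored under it
Console.WriteLine(string.Join(", ", lookup["fruit"])); // apple, pear

// missing key: an empty sequence - no KeyNotFoundException, no null check needed
Console.WriteLine(lookup["meat"].Any());    // False
Console.WriteLine(lookup.Contains("meat")); // False
```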
<p><a href="/2014/01/downsides-of-net-lookups.html">Next time</a> I'll try to find some pitfalls or inconveniences of using lookups.</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com4tag:blogger.com,1999:blog-6038034264692630628.post-52217371753683103582013-12-29T20:36:00.000+01:002013-12-29T20:36:06.200+01:00Me & Open Source - a New Year's Resolution<p>I have been working as a full-time software developer for several years now. I've already written many thousands of lines of code, but nearly none of it is publicly available. Most of it remains the property of the employers it was created for, but there are always some pieces that are already open source, or side projects that don't need to adhere to strict ownership rules and can be generalized enough to be shared with others.</p>
<p>I've always wanted to share, and I've had hundreds of ideas for full-size projects I could do when there was enough time. But in reality, there was never enough time or motivation for anything big. A full-time job takes a lot of time; the rest goes to family and other duties. But there are always a few late-evening hours each week that can be saved.</p>
<p>All the things I've shared with the community to date have been mentioned here on my blog. These were mostly short gists or snippets. The only larger thing was an <a href="/2012/03/loquacious-html-builder-based-on-xsd.html">HTML building API</a> based on the "loquacious" interface idea, published on <a href="https://github.com/NOtherDev/NOtherHtml">GitHub as NOtherHtml</a>. I still love the design! But I admit it was rather a one-timer, not a major contribution to the open source world.</p>
<p>So, here is my New Year's resolution for 2014: try spending those few late-evening hours on contributing to open source projects. Yeah, I know no one cares, but I'm sharing it because posting it publicly will motivate me more than leaving it as an internal thought. There will probably be no biggies, for the reasons mentioned; I think I'll rather focus on bug fixes or small features for the projects I use. It's easiest, it's beneficial, it's fair.</p>
<p>As a start, yesterday I submitted my first <a href="https://github.com/developwithpassion/developwithpassion.specifications/pull/8">pull request</a> to a "real" open source project I use - <a href="https://github.com/developwithpassion/developwithpassion.specifications">JP. Boodhoo's developwithpassion.specifications</a>. I fixed a bug in an extension method that helps assert the equality of collections' elements, but failed when the collections were of different lengths. I feel I'm already over the hump!</p>
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com0tag:blogger.com,1999:blog-6038034264692630628.post-33209717278823253462013-10-06T23:03:00.000+02:002013-10-06T23:03:12.550+02:00ASP.NET MVC: Current route values implicitly used by Url.Action<p>I really don't like it when I work with a statically-typed language like C# and still need to rely on strings or other loosely-typed constructs for things that could be strongly-typed. ASP.NET MVC has gone a long way towards strong typing and now offers things like strongly-typed view models, but it still lacks official built-in support for building internal links and URLs without using plain controller and action strings.</p>
<p>Fortunately, this is where MVC Futures comes in (available on NuGet for MVC <a href="http://www.nuget.org/packages/Mvc3Futures/">3</a> or <a href="http://www.nuget.org/packages/Mvc4Futures/">4</a>). It offers nice <a href="http://maxtoroq.blogspot.com/2013/04/delegate-based-strongly-typed-url.html">expression-based extension methods</a> to build links (on <tt>HtmlHelper</tt>):</p>
<pre class="brush: csharp">Html.ActionLink<TheController>(c => c.TheAction(theParam))</pre>
<p>and URLs (on <tt>UrlHelper</tt>):</p>
<pre class="brush: csharp">Url.Action<TheController>(c => c.TheAction(theParam))</pre>
<p>It works fine, it protects me from those ugly string-based methods, and it is also a very convenient way to specify the target action parameters - definitely nicer than manually building the <tt>RouteValuesCollection</tt> required by the traditional methods.</p>
<p>But there is one not-so-well-known MVC feature (at least I haven't heard much about it) that seems disturbing here - <a href="http://stackoverflow.com/questions/7133223/asp-net-mvc-url-action-adds-current-route-values-to-generated-url">MVC reuses the route values from the current request when building URLs</a> if no other value is specified. This may make some sense for the string-based methods (you don't need to build those <tt>RouteValuesCollection</tt>s over and over again), but it makes no sense for the expression-based approach, as the compiler enforces specifying all the parameters explicitly in order to build a lambda expression that compiles.</p>
<p>And here comes the nasty bug I recently spent some hours on - passing a <tt>null</tt> value is like not passing a value at all. MVC puts <tt>null</tt> under the appropriate key in the <tt>RouteValuesCollection</tt>; it then goes down to the <tt>Route.GetVirtualPath</tt> method, which also takes the current route values from the <tt>RequestContext</tt>. And then the evil happens - the low-level <tt>System.Web.Routing</tt> method <tt>ParsedRoute.Bind</tt> ignores the <tt>null</tt> value and - bang - it takes the value from the current request, if any accidentally matches by the parameter name.</p>
<p>This means that when we try to build a URL passing a parameter <tt>param</tt> equal to <tt>null</tt>, and the current request accidentally also has a parameter named <tt>param</tt> (regardless of its type or of any logical connection), the request's <tt>param</tt> value will be passed instead of our explicitly demanded <tt>null</tt> value. And I can see no way to pass <tt>null</tt> in this case.</p>
<p>Actually, the bug (or feature?) exists in case of string-based URL builder methods, too. But here it is much more visible and obviously wrong.</p>
<p>The only way I know to work around that strange implicit inheritance of parameters by name is either to remove the name collision by renaming one of the parameters (yuck!) or to use our own extension method.</p>
<p>I've created my own extension method to generate URLs/links that has the same signature as the buggy ones and placed it in a namespace more visible than those replaced (you'd better check your usings carefully). Here is the <tt>UrlHelper</tt> extension I have - the <tt>HtmlHelper</tt> implementation generally calls <tt>UrlHelper</tt> and wraps its result in a link, so I'll omit it here. The method calls the same methods as the original being replaced, but it tweaks the <tt>RequestContext</tt> instance: instead of passing the instance available in <tt>UrlHelper</tt> (which contains the conflicting route values from the current request), I create a new <tt>RequestContext</tt>, reusing the <tt>Route</tt> and <tt>IRouteHandler</tt> instances from the current request and leaving the actual route values empty. This way there's no possibility for the current values to "infect" our URL building process anymore. Interestingly, the <a href="http://msdn.microsoft.com/en-us/library/system.web.routing.routedata.routedata.aspx"><tt>RouteData</tt> constructor</a> doesn't enforce the values being set anyway.</p>
<pre class="brush: csharp">public static string Action<TController>(this UrlHelper helper, Expression<Action<TController>> action)
    where TController : Controller
{
    // we need to recreate RequestContext without values, to override the MVC "feature"
    // that replaces null specified in action with value inherited from current request
    var currentRouteData = helper.RequestContext.RouteData;
    var fixedRouteData = new RouteData(currentRouteData.Route, currentRouteData.RouteHandler);
    var fixedRequestContext = new RequestContext(helper.RequestContext.HttpContext, fixedRouteData);
    var valuesFromExpr = Microsoft.Web.Mvc.Internal.ExpressionHelper.GetRouteValuesFromExpression(action);
    var vpd = helper.RouteCollection.GetVirtualPathForArea(fixedRequestContext, valuesFromExpr);
    return vpd == null ? null : vpd.VirtualPath;
}</pre>
<p>Well, it seems to be yet another example of the fact that <a href="http://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare"><tt>null</tt> is a rather vague and problematic thing</a>. It can represent many different notions, depending on the context and implementation - no value, an empty value, an inherited value, etc. Although ASP.NET MVC treats <tt>null</tt> as "inherit the value" <a href="http://stackoverflow.com/questions/1049027/asp-net-mvc-strongly-typed-views-partial-view-parameters-glitch">not only in this case</a>, I'd always prefer being explicit about that kind of behavior.</p>
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com5tag:blogger.com,1999:blog-6038034264692630628.post-78020159458479836772013-10-02T21:25:00.000+02:002013-10-02T21:25:06.494+02:00On Abstractions in Requirements<p>We, as software developers, are used to working with abstractions and strive to work with abstractions rather than with specific implementation details. We always try to build abstraction layers on top of complicated rules and design our systems so that the details are hidden at the low levels. The system, looked at from some distance, is built from nice generic bricks.</p>
<p>But we're much less used to keeping our abstractions in line with what our business requirements are and with how the stakeholders look at them. We often think about abstractions only on the technical, source-code level. But ideally, <b>if something is really a well-designed abstraction, it should serve everyone equally well</b>.</p>
<p><b>Know the abstractions the customers use</b></p>
<p>When building software for a particular group of users, they probably have some idea of how things should work and how things relate to each other. Ask your customers not only for the details you need to complete your concrete implementation, but also make sure you know the bigger picture. If your customers define some rules, then before implementing them, learn what these rules are about and what they represent. These rules probably result from some business concept of the customers', and it will appear in multiple requirements or ideas over and over again. Knowing the concept upfront will save time spent finding duplicated rules and/or inventing the abstractions on your own. If it is not identified early enough, introducing it to the codebase later will probably mean conflicts with the existing, made-up abstractions.</p>
<p><b>Agree with customers on the abstractions</b></p>
<p>Don't introduce a high-level abstraction for things that sound similar if your customers don't use those things together. When the product evolves, this kind of abstraction will probably end up as a Gordian knot tying together two things that are meant to go different ways. Eventually you'll have to remove the abstraction along the way (which may not be easy) or some dirty hacks will grow around it.</p>
<p><b>Talk with everyone about naming</b></p>
<p>Naming is one of the most difficult things in programming, right? So why not rely on our customers? They probably know better how to name the things they are familiar with. Discuss the synonyms and agree on the naming before introducing the abstraction to the codebase. It may be a good idea to have a glossary of the terms the customers use and how they are represented in the codebase, so that different developers (but also managers, technical writers, customer support, etc. - essentially all the stakeholders) don't use another synonym for the same thing. This will avoid confusion and misunderstanding. Moreover, mapping "documents" to "articles" through "news stories" would be weird anyway.</p>
<p><b>Don't use details when there's an abstraction agreed</b></p>
<p>Whoever creates the requirements for your project, they should be aware that you know the concepts they use, so that they will use them in communication, for everyone's convenience. Subsequent requirements should not repeat the definition of the abstraction in detail, but rely on the fact that you already know it. Otherwise it will probably end up inconsistent or implemented locally, separately from the previously introduced abstractions. When the existing concept's definition turns out to be an obstacle and needs to be clarified with details later in the project's development, it probably means that either there is no real abstraction there or the abstraction is not defined or understood correctly by all parties.</p>
<p><b>Make sure you have a proper number of well-understood abstractions</b></p>
<p>Abstractions should be - well - abstract enough. It means that they should be easy to grasp and explain to a new team member or a new customer representative. If that's not the case - it's probably not a high-level abstraction. There are also practical limits on how many high-level abstractions make sense in a project. If there are too many to be learned easily, either the project is overcomplicated or - again - these are not really high-level abstractions.</p>
<br/>
<p>This all may sound obvious, but I've recently experienced that it isn't. In our project, there was a high-level decision made that we need to have our content categorized. The idea of the categories went through several layers of business analysts and subject-matter experts, and before it got to the development team, it became fuzzy and the general concept flittered away. We got nothing about categories, just a bunch of seemingly unrelated yet non-trivial requirements to "move things around here and there". We implemented them as such, rather thoughtlessly. And some time later there was a business decision that the things should work together, as they are related. But that "related" thing doesn't exist for us at all - we have several concepts of "relations" of our own instead, grown out of the developers' need for abstractions. And now we need to go a few big steps back to align. Oops!</p>
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com0tag:blogger.com,1999:blog-6038034264692630628.post-11161114115425713042013-09-25T21:52:00.000+02:002013-09-25T21:52:36.906+02:00The Alternative Development Methodologies Glossary<p>Have you ever wondered what all those TDDs, BDDs, DDDs, ATDDs mean? Are you sure your project is following all these methodologies that form the software development mainstream? Do you know what to do to stay up to date?</p>
<p>Here is the comprehensive and exhaustive list of various established development techniques that you can meet (or have a chance to introduce) in your project. Feel free to be inspired and use as many -DDs as you can!</p>
<dl>
<dt>ADD: Anticipation Driven Development</dt>
<dd>When the requirements are carefully hidden in stakeholders' minds and not shared with the development team until the deadline. In that methodology <a href="http://geek-and-poke.com/geekandpoke/2013/7/22/future-proof-your-data-model">developers need to be ready for everything</a>.</dd>
<dt>BDD: Bug Driven Development</dt>
<dd>Delivering anything as quickly as possible and explaining all the missing features as "just bugs". Then fixing them <a href="http://geek-and-poke.com/geekandpoke/2013/8/13/coders-part-213">using a mix of magic and dirty hacks</a>.</dd>
<dt>CDD: Copy-Paste Driven Development</dt>
<dd>Introducing new feature to the system is easy as long as the feature may be put together from the borrowed parts of other features. <a href="http://en.wikipedia.org/wiki/Coupling_(computer_programming)">Low coupling</a>, sort of.</dd>
<dt>DDD: Development Driven Development</dt>
<dd>Have you ever had the impression that the project's code just lives its own life?</dd>
<dt>EDD: Email Driven Development</dt>
<dd>As a productivity improvement technique, developers are expected to reduce wasting time in those ugly black IDEs and keep focused on the real stuff being discussed via email. 60 times a day.</dd>
<dt>FDD: Frustration Driven Development</dt>
<dd>After delivering the final requirements, implementing and making the code well-factored and production-ready, the new final (finaler) requirements arrive, without changing the deadline. And then, two weeks later, there are the finalest requirements.</dd>
<dt>GDD: Google Driven Development</dt>
<dd>Let's google it up. All the code is already there, for sure.</dd>
<dt>HDD: Hardcoding Driven Development</dt>
<dd><a href="http://www.codinghorror.com/blog/2004/10/kiss-and-yagni.html">KISS and YAGNI</a> plus being extremely explicit everywhere. So explicit that any <a href="http://thedailywtf.com/Articles/Now-Wait-for-Next-GetYear().aspx">business logic</a> or <a href="http://thedailywtf.com/Articles/The-Page-at-Fault.aspx">calculations</a> make a smell. May also be the Test Driven Development that went too far (the code works only for the cases used in the tests, because all the answers are hardcoded).</dd>
<dt>IDD: Idiot Driven Development</dt>
<dd>Self-explaining. But don't use that name when talking to your boss.</dd>
<dt>JDD: Job Security Driven Development</dt>
<dd>The primary goal each developer seems to have is to become indispensable. It's easy to do when adhering to several <a href="http://thc.org/root/phun/unmaintain.html">simple rules</a>. That technique encourages great productivity of developers (when counted in lines of code written).</dd>
<dt>KDD: Knot Driven Development</dt>
<dd>From the <a href="http://en.wikipedia.org/wiki/Gordian_Knot">Gordian Knot</a> - when the <a href="http://en.wikipedia.org/wiki/Cyclomatic_complexity">cyclomatic complexity</a> is higher than the number of lines. And when <a href="http://thedailywtf.com/Articles/Breaking-Broken.aspx">all</a> <a href="http://thedailywtf.com/Articles/Who_0x27_s_Hard-Core_Now_0x3f_.aspx">the</a> <a href="http://thedailywtf.com/Articles/The-Page-at-Fault.aspx">special</a> <a href="http://thedailywtf.com/Articles/Nine-Ways-to-Tuesday.aspx">cases</a> are covered with enough attention and care.</dd>
<dt>LDD: Layering Driven Development</dt>
<dd>The technique focused on <a href="http://geek-and-poke.com/geekandpoke/2013/7/13/foodprints">proper code encapsulation and isolation</a> - each developer creates his/her own layer of abstraction. Who wants to work with that legacy code, anyway?</dd>
<dt>MDD: Meeting Driven Development</dt>
<dd>Methodology that relies on the premise that each development problem may be solved by discussing it over and over, regularly, in an adequately large group for an adequately long time. And if it still doesn't succeed, invite even more managers.</dd>
<dt>NDD: Not Driven Development</dt>
<dd>IDD with the principle of team <a href="http://www.scrumalliance.org/community/articles/2013/january/self-organizing-teams-what-and-how">self-organization</a>.</dd>
<dt>ODD: Outlook Driven Development</dt>
<dd>See EDD. A syntax coloring plugin for Outlook would be nice. And also a source control plugin, to be able to commit the code by email, finally.</dd>
<dt>PDD: Patch Driven Development</dt>
<dd>When the development is time-constrained, with a fixed date, anything should always be deployed to production on time, regardless of its development status. Then there's a patching phase, which consists of trying to understand what was deployed and replacing it with working bits through some <a href="http://en.wikipedia.org/wiki/Shotgun_surgery">shotgun surgeries</a>.</dd>
<dt>QDD: Quick-Fix Driven Development</dt>
<dd>Mix of Patch Driven Development, TODO Driven Development and <a href="http://geek-and-poke.com/geekandpoke/2013/7/28/tdd">some hope</a>.</dd>
<dt>RDD: Refukctoring Driven Development (officially: Refactoring Driven Development)</dt>
<dd>When it's not possible to isolate the crappy code written last year (see FDD), comment it out and leave with "TODO" flag. Then write it once again.</dd>
<dt>SDD: Showcase Driven Development</dt>
<dd>Don't do anything unless there's a customer showcase tomorrow. Then rigorously apply QDD and TDD.</dd>
<dt>TDD: TODO Driven Development</dt>
<dd>Project planning methodology that assumes the existence of some time in the project's future when there will be no new feature requests or bugs and the developers will have a lot of time to reduce the technical debt.</dd>
<dt>UDD: Utopia Driven Development</dt>
<dd>Pragmatics' name for TODO Driven Development.</dd>
<dt>VDD: Versatility Driven Development</dt>
<dd>The methodology about developing a solution for every single customer need (including those not yet discovered) at once, by a single project. Also the way of writing code so that it can be re-configured to do other things in the future, if the need arises. Also known as <a href="http://thedailywtf.com/Articles/Soft_Coding.aspx">Soft Coding</a>.</dd>
<dt>WDD: Wheel-Reinventing Driven Development</dt>
<dd>The concept based on a natural and inborn lack of trust in others' code, including that of the guys in the next room. Instead of taking the risk of using others' code, let's write it from scratch again (and use both versions in the same project at the same time). Sort of loose coupling, again.</dd>
<dt>XDD: Xero Driven Development</dt>
<dd>Mutation of CDD. The only features possible to implement are those like the ones already implemented. We need to have some strong foundations to copy from, right?</dd>
<dt>YDD: Yesterday Driven Development</dt>
<dd>Pretty broadly used <a href="http://geek-and-poke.com/geekandpoke/2013/6/6/dont-ask-your-boss">time management methodology</a> in software development. All tasks are due two days before tomorrow.</dd>
<dt>ZDD: Zen Driven Development</dt>
<dd>Software development and mind training at once. Occurs when the codebase requires "<a href="http://en.wikipedia.org/wiki/Zen"><i>the attainment of enlightenment and the personal expression of direct insight</i></a>" to work with.</dd>
</dl>
<p>Let me know if I forgot anything from the mainstream.</p>
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com1tag:blogger.com,1999:blog-6038034264692630628.post-44839594243049535372013-07-11T22:36:00.001+02:002013-07-13T19:55:21.272+02:00Liskov Substitution Principle vs. immutabilityToday I had a very interesting discussion about <a href="http://en.wikipedia.org/wiki/Liskov_substitution_principle">Liskov Substitution Principle</a> (LSP) - what it really means and how to avoid breaking it, of course using an overused example of squares and rectangles.
Liskov's principle tells us to design our inheritance trees or interface implementations in such a way that wherever the code uses the base class (or interface), any of the derived classes (implementations) can be used without breaking existing behavior or invariants. In a wider understanding, it also means that whenever we use an abstraction like a base class or an interface, we <b>should not expect any particular implementation to be provided at runtime, nor should we know anything about those implementations</b>. This also forbids constructs like conditions based on the implementation's concrete type - if we need them, we're working with a <a href="http://codebetter.com/jeremymiller/2006/01/18/leaky-abstractions-and-the-last-responsible-moment-for-design/">leaky abstraction</a> that needs to be rethought and not hacked around.
<pre class="brush: csharp">void MethodThatGetsAbstraction(IAbstraction abstraction)
{
    if (abstraction is SomethingReal)
        YouReDoingItWrong();
}</pre>
Let me recall the classic example of shapes I've mentioned before. As we know, squares are rectangles, and we often represent <a href="http://msdn.microsoft.com/en-us/library/27db6csx(v=vs.80).aspx">"is a" relationships</a> like this in code using inheritance - <tt>Square</tt> derives from <tt>Rectangle</tt> (I don't want to discuss here whether that makes sense or not, but this IS the most often quoted example when talking about LSP). A rectangle, mathematically speaking, is defined by its side lengths. We can represent these as properties in our <tt>Rectangle</tt> class. We also have a method to calculate the area of our rectangle:
<pre class="brush: csharp">public class Rectangle
{
    public int Height { get; set; }
    public int Width { get; set; }

    public int CalculateArea()
    {
        return Height * Width;
    }
}</pre>
Now enter the <tt>Square</tt>. It is a special case of a rectangle that has both sides of equal length. We implement it by setting <tt>Height</tt> to equal <tt>Width</tt> whenever we set <tt>Width</tt>, and vice versa. Now consider the following test:
<pre class="brush: csharp">class When_calculating_rectangle_area
{
    [Test]
    public void It_should_calculate_area_properly()
    {
        // arrange
        var sut = new Rectangle();
        sut.Width = 2;
        sut.Height = 10;

        // act
        var result = sut.CalculateArea();

        // assert
        Assert.Equal(result, 20);
    }
}</pre>
It works perfectly fine until, according to the ability given to us by Liskov's principle, we change <tt>Rectangle</tt> into its derived class, <tt>Square</tt> (and leave the rest unchanged, again according to the LSP). Note that in real-life scenarios we rarely create the <tt>Rectangle</tt> a line above - <b>we get it created somewhere else and we probably don't even know its specific runtime type</b>.
<pre class="brush: csharp">class When_calculating_square_area
{
    [Test]
    public void It_should_calculate_area_properly()
    {
        // arrange
        var sut = new Square();
        sut.Width = 2;   // also sets Height behind the scenes
        sut.Height = 10; // also sets Width behind the scenes, overriding the previous value

        // act
        var result = sut.CalculateArea();

        // assert
        Assert.Equal(result, 20); // naah, we have 100 here
    }
}</pre>
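For reference, the <tt>Square</tt> exercised by the failing test above (its code isn't shown in the post) could be sketched like this. Note it assumes <tt>Rectangle</tt> declares <tt>Height</tt> and <tt>Width</tt> as <tt>virtual</tt> - an assumption on my part, needed for the overrides to compile:

```csharp
// Rectangle as above, except the properties are assumed virtual here.
public class Rectangle
{
    public virtual int Height { get; set; }
    public virtual int Width { get; set; }

    public int CalculateArea()
    {
        return Height * Width;
    }
}

// Each setter silently keeps the other side in sync - the behavior
// that surprises callers working against the Rectangle abstraction.
public class Square : Rectangle
{
    public override int Height
    {
        get { return base.Height; }
        set { base.Height = value; base.Width = value; }
    }

    public override int Width
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }
}
```

With this sketch, the test above really does return 100: setting <tt>Width</tt> to 2 sets both sides to 2, and setting <tt>Height</tt> to 10 then overrides both sides to 10.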
<tt>Square</tt> inheriting from <tt>Rectangle</tt> breaks the LSP badly by <b>strengthening the base class' preconditions in a subtype</b> - the base class doesn't require equal sides; it's an implementation-specific whim that consumers of the base class don't even know about when the class is derived from in the wild.
But what is the root cause of that failure? It's the fact that <tt>Square</tt> is not able to enforce its invariant (both sides equal) properly. It couldn't yell by throwing an exception whenever one tries to set the height differently than the width, because one may set the width earlier than the height - and we definitely don't want to enforce a particular property-setting order, right? Setting the width and height atomically would solve that problem - consider the following code:
<pre class="brush: csharp">public class Rectangle
{
    private int _height;
    private int _width;

    public virtual void SetDimensions(int height, int width)
    {
        _height = height;
        _width = width;
    }

    public int CalculateArea()
    {
        return _height * _width;
    }
}

public class Square : Rectangle
{
    public override void SetDimensions(int height, int width)
    {
        if (height != width)
            throw new InvalidOperationException("That's a weird square, sir!");

        base.SetDimensions(height, height);
    }
}</pre>
But now we've replaced one LSP abuse with another. How can the poor users of <tt>Rectangle</tt> be prepared for <tt>SetDimensions</tt> throwing an exception? Again, it's an implementation-specific weirdness; we don't want our abstraction to know about it, as it would become a pretty leaky one.
But are width and height really a state? I'd rather say they are the shape's <b>identity</b> - if they change, we are talking about a different shape than before. So <b>why expose setters at all</b>? This leads us to the simple but extremely powerful concept of <a href="http://blogs.msdn.com/b/ericlippert/archive/tags/immutability/">immutability</a> - a concept in which the object's data, once set, cannot be changed and is read-only. The only way to populate an immutable object with data is to do it when creating the object - in its constructor. Note that <a href="http://www.blackwasp.co.uk/ConstructorInheritance.aspx">constructors are not inherited</a>, and when <tt>new</tt>-ing a <tt>Rectangle</tt> we are 100% sure it's not a <tt>Square</tt> or any other unknown beast; conversely, when constructing a <tt>Square</tt> we can enforce our invariant without any influence on the base class behavior - as all properly constructed <tt>Square</tt>s will be valid <tt>Rectangle</tt>s.
<pre class="brush: csharp">public class Rectangle
{
    private int _height;
    private int _width;

    public Rectangle(int height, int width)
    {
        _height = height;
        _width = width;
    }

    public int CalculateArea()
    {
        return _height * _width;
    }
}

public class Square : Rectangle
{
    public Square(int size)
        : base(size, size)
    {
    }
}</pre>
LSP satisfied, code clean, nasty bugs and leaky abstractions removed - immutability is king!
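As a quick sanity check, here is a self-contained sketch reusing the final classes (with <tt>readonly</tt> added for emphasis and a hypothetical helper of my own): any code written against <tt>Rectangle</tt> now behaves consistently regardless of the runtime type it receives.

```csharp
using System;

public class Rectangle
{
    private readonly int _height;
    private readonly int _width;

    public Rectangle(int height, int width)
    {
        _height = height;
        _width = width;
    }

    public int CalculateArea()
    {
        return _height * _width;
    }
}

public class Square : Rectangle
{
    // The invariant (equal sides) is enforced once, at construction time.
    public Square(int size)
        : base(size, size)
    {
    }
}

public static class Program
{
    // Works for any Rectangle, no matter which subtype arrives at runtime.
    public static int AreaOf(Rectangle shape)
    {
        return shape.CalculateArea();
    }

    public static void Main()
    {
        Console.WriteLine(AreaOf(new Rectangle(2, 10))); // 20
        Console.WriteLine(AreaOf(new Square(5)));        // 25
    }
}
```

No setter order, no surprising exceptions - whatever <tt>Rectangle</tt> the caller is handed, the area is exactly what its construction promised.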
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com7tag:blogger.com,1999:blog-6038034264692630628.post-903894357566790502013-07-04T23:17:00.000+02:002013-07-04T23:17:30.692+02:00Code Smells: Rigidity<p>I was recently preparing a small talk about several code smells, as discussed by <a href="https://twitter.com/unclebobmartin">Robert C. Martin</a> in his famous <a href="http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882">'Clean Code'</a> book. I've decided to share my notes - it's nothing revolutionary, a bit of copy-paste from well-known resources and a bit of my personal comments and understanding.</p>
<p>The one for today is <b>rigidity</b>. Rigidity <a href="https://www.vocabulary.com/dictionary/rigidity">literally means</a> "the inability to change or be changed to fit changed circumstances". And that nicely fits what the smell is about: "<b>The software is difficult to change. A small change causes a cascade of subsequent changes</b>". We all know it - some tasks that sounded easy at the beginning became a few weeks of bug hunting or turned out to be too difficult to finish without destroying everything around.</p>
<p><b>When does it happen?</b></p>
<ul><li>When the code is written in a procedural style - a.k.a. "everything in one place" - with a lot of very narrow, specific, possibly unrelated conditions and special-case handling included directly in the "main" method,</li><li>When there is a lack of abstractions - meaning that the code operates on the low level of technical details instead of focusing on more real-life concepts, like fiddling with bit flags to check an object's characteristics instead of having them available as a method or a property,</li><li>When the code implements some general concept, but defines it with details specific to a particular use case - like code that renders an HTML table from a general matrix but also decides that the table header has a dark background,</li><li>When a single responsibility is spread between many classes or code layers - like checking the user's permissions on every layer from the data access layer up to the UI layer,</li><li>When components need to know a lot of details about each other to communicate properly (<a href="http://en.wikipedia.org/wiki/Leaky_abstraction">leaky abstractions</a>).</li></ul>
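To illustrate the bit-flags point above, here is a hypothetical sketch (the <tt>Order</tt> type, the flag layout and the member names are all made up for this example): the same fact can be exposed as a raw technical detail or wrapped in a concept callers actually understand.

```csharp
// Hypothetical example of wrapping a low-level detail in an abstraction.
public class Order
{
    private const int CancelledFlag = 0x04; // made-up bit layout
    private readonly int _statusFlags;

    public Order(int statusFlags)
    {
        _statusFlags = statusFlags;
    }

    // Rigid style: every caller must know the bit layout and repeat it.
    public int StatusFlags
    {
        get { return _statusFlags; }
    }

    // Flexible style: the technical detail is hidden behind a real-life
    // concept; if the storage ever changes, only this one place changes.
    public bool IsCancelled
    {
        get { return (_statusFlags & CancelledFlag) != 0; }
    }
}
```

A caller writing <tt>order.IsCancelled</tt> survives a change of the underlying representation, while <tt>(order.StatusFlags &amp; 0x04) != 0</tt> scattered around the codebase is exactly the cascade of changes this smell describes.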
<p><b>How to avoid it?</b></p>
<ul><li>Thinking in general terms, wrapping the implementation details into abstractions that are understood on less technical levels,</li><li>Creating code with satisfying the requirements first in mind, not the APIs we're using or the constraints we have around,</li><li>Using object-oriented techniques and <a href="http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)">SOLID</a> - with its small classes, single responsibilities and <a href="http://codeofrob.com/entries/my-relationship-with-solid---the-big-o.html">open design</a>,</li><li>Encapsulating the code in logical pieces, defining clear boundaries and interfaces between them - if you can't tell where the border of responsibilities lies, it's probably done wrong,</li><li><a href="http://simpleprogrammer.com/2010/11/13/basic-to-basics-what-is-dependency-inversion-is-it-ioc-part-1/">Depending upon abstraction, not implementation</a>.</li></ul>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com1tag:blogger.com,1999:blog-6038034264692630628.post-54957996719722034912013-05-30T21:18:00.002+02:002013-05-30T21:18:58.478+02:00What's planned for NHibernate 4?<p>Yesterday Ricardo Peres wrote <a href="http://weblogs.asp.net/ricardoperes/archive/2013/05/29/the-state-of-entity-framework-and-nhibernate.aspx">an article about the state of NHibernate development</a>, comparing it to Entity Framework and noticing that <b>NHibernate's future doesn't look bright</b>. It's pretty much in line with what I meant asking if <a href="/2012/09/is-nhibernate-dead.html">NHibernate is already dead</a> a few months ago. But, as I'm still actively using NHibernate in my project and there are no prospects of changing that, it would be nice if that's not the case.</p>
<p>Although there's <a href="https://nhibernate.jira.com/browse/NH#selectedTab=com.atlassian.jira.plugin.system.project%3Aroadmap-panel">no clear roadmap available</a>, it seems that the NHibernate community is planning a major release, bumping the version number to 4. The reason for that is pretty simple - the product is going to <a href="https://nhibernate.jira.com/browse/NH-3164"><b>switch from .NET 3.5 to 4</b></a>, mostly to <a href="https://nhibernate.jira.com/browse/NH-3165">retire the Iesi.Collections dependency</a>, which provided set implementations that are now partially available in the BCL. This is a major breaking change, and that certainly justifies bumping the major version number. Some <a href="https://nhibernate.jira.com/browse/NH-3346">obsoletes</a> will be cleaned up at the same time.</p>
<p><b>Are there any new features planned for 4.0?</b> Well, <a href="https://nhibernate.jira.com/issues/?jql=fixVersion%20%3D%20vNext%20AND%20project%20%3D%20NH">there are some</a>, but I can't say the list is impressive. <a href="https://nhibernate.jira.com/browse/NH-3166">SQL Server 2012 will be supported better</a> (sequences, limit feature), <a href="https://nhibernate.jira.com/browse/NH-3284">Ingres9 will also be supported</a> out of the box, and a few other dialects will have some improvements. But, let's admit it, this is not a big deal for existing users. There are of course several bugfixes and minor improvements done and planned for that version, e.g. in the Linq provider (<a href="https://nhibernate.jira.com/browse/NH-3186">NH-3186</a>, <a href="https://nhibernate.jira.com/browse/NH-2852">NH-2852</a>, <a href="https://nhibernate.jira.com/browse/NH-3056">NH-3056</a>, <a href="https://nhibernate.jira.com/browse/NH-2915">NH-2915</a>, <a href="https://nhibernate.jira.com/browse/NH-3256">NH-3256</a>) and in mapping-by-code (<a href="https://nhibernate.jira.com/browse/NH-3280">NH-3280</a>, <a href="https://nhibernate.jira.com/browse/NH-3133">NH-3313</a>). There is also one possibly significant <a href="https://nhibernate.jira.com/browse/NH-3382">performance improvement</a> already done for Linq queries.</p>
<p>The draft version of the <a href="https://nhibernate.jira.com/secure/ReleaseNote.jspa?projectId=10000&version=10740">release notes</a> is available as a reference for all JIRA issues already resolved. Unfortunately, I can see <b>no "flagship" feature</b> that might spark interest; the majority of the list is rather maintenance of existing features or cleanup. Definitely not enough to encourage more interest in NHibernate, leaving the framework retargeting breaking change as the only thing that really justifies releasing this as 4.0 and not as 3.4.</p>
<p>And to make things clear, I'm not complaining that the current contributors are lazy. I think <a href="http://hazzik.ru/">Hazzik</a> and <a href="http://stackoverflow.com/users/1141275/oskar-berggren">Oskar Berggren</a> are doing a fantastic job keeping things up and running. The problem is that they are <a href="https://www.ohloh.net/p/nhibernate/contributors?sort=latest_commit&time_span=30+days">the only really active members</a> of the community. And the size and complexity of the project is big enough to make gaining new contributors really hard, for sure. I once tried to fix one bug - and I failed.</p>
<p>Anyway, my fingers are crossed in the hope of great new features in NHibernate but, looking objectively, I don't expect anything like that to happen.</p>
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com10tag:blogger.com,1999:blog-6038034264692630628.post-25332338300317876482013-04-15T09:00:00.000+02:002013-04-15T09:00:23.483+02:00NHibernate's LINQ GroupBy capabilities<p>Recently, in a project using NHibernate 3.2, I needed to use some aggregations in my database queries. The use case was pretty typical - aggregate a pre-filtered set of invoices by the product sold, count how many sales there were for each product, order the data by total sales value and take the top 10 results. It is pretty easy to accomplish in SQL:</p>
<pre class="brush: sql">SELECT TOP 10 Product, COUNT(*) AS SaleCount, SUM(Value) AS TotalValue
FROM Invoices
WHERE Cancelled = 0
GROUP BY Product
ORDER BY TotalValue DESC</pre>
<p>It is also pretty easy to express in LINQ syntax:</p>
<pre class="brush: csharp">Invoices.Where(i => i.Cancelled == false)
.GroupBy(i => i.Product)
.Select(g => new TopSellingProduct
{
Product = g.Key,
SaleCount = g.Count(),
TotalValue = g.Sum(i => i.Value)
})
.OrderByDescending(g => g.TotalValue)
.Take(10);</pre>
<p>I knew that NHibernate's LINQ provider offers limited support for the <tt>GroupBy</tt> operator. Taking into consideration that all the lambda expressions in the query are in fact <a href="http://msdn.microsoft.com/en-us/library/bb397951.aspx">expression trees</a> that need to be parsed and expressed in SQL, what I expected to be most problematic was the <tt>Select</tt> clause, which creates new <tt>TopSellingProduct</tt> instances (not an NHibernate-managed entity) and sets their properties, in the case of <tt>Sum</tt> even using nested lambdas. Actually, this was not a problem at all, even when using anonymous types inside - impressive! NHibernate somehow gets the list of fields and aggregation functions that need to be fetched and turns them into a <tt>SELECT</tt> clause correctly.</p>
<p>But the query above couldn't be translated into SQL anyway. It turned out that the operators that seemed easier to implement - <tt>OrderBy</tt>, <tt>Take</tt> and <tt>Skip</tt> - were not supported. So <b>with NHibernate 3.2, I could only create an aggregation and fetch all the aggregated values at once, without ordering or paging</b>. In my case, that could mean fetching 50k rows just to show the top 10. Not an option.</p>
<p>Fortunately, a quick search through <a href="https://nhibernate.jira.com/browse/NH">NHibernate's JIRA dashboard</a> gave me hope that things look better in the newer NHibernate version - 3.3.1. I upgraded seamlessly using <a href="http://nuget.org/packages/nhibernate">NuGet</a>, and here is a summary of my observations:</p>
<table>
<tr><th>SQL feature</th><th style="width: 45%">LINQ syntax example</th><th>NHibernate 3.2</th><th>NHibernate 3.3.1</th></tr>
<tr class="odd"><td><tt>SELECT</tt> of simple aggregated value; <tt>COUNT()</tt> function</td><td><tt>.GroupBy(x => ...).Select(g => g.Count())</tt></td><td style="color: #006600">OK</td><td style="color: #006600">OK</td></tr>
<tr><td><tt>SELECT</tt> of anonymous class</td><td><tt>.GroupBy(x => ...)</tt><br/><tt> .Select(g => new { g.Key, Count = g.Count() })</tt></td><td style="color: #006600">OK</td><td style="color: #006600">OK</td></tr>
<tr style="background-color: #fff"><td><tt>SELECT</tt> of named class</td><td><pre>.GroupBy(x => ...)
.Select(g => new MyType
{
Key = g.Key,
Count = g.Count()
})</pre></td><td style="color: #006600">OK</td><td style="color: #006600">OK</td></tr>
<tr><td><tt>SUM(), MIN(), MAX()</tt> functions</td><td><pre>.GroupBy(x => ...)
.Select(g => new
{
Sum = g.Sum(x => ...),
Min = g.Min(x => ...),
Max = g.Max(x => ...)
})</pre></td><td style="color: #006600">OK</td><td style="color: #006600">OK</td></tr>
<tr class="odd"><td><tt>AVG()</tt> function</td><td><tt>.GroupBy(x => ...).Select(g => g.Avg(x => ...))</tt></td><td style="color: #ff0000">buggy, truncates value to <tt>int</tt> (<a href="https://nhibernate.jira.com/browse/NH-2429">NH-2429</a>)</td><td style="color: #006600">OK</td></tr>
<tr><td><tt>WHERE</tt> (condition applied before aggregation)</td><td><tt>.Where(x => ...).GroupBy(x => ...)</tt></td><td style="color: #006600">OK</td><td style="color: #006600">OK</td></tr>
<tr class="odd"><td><tt>HAVING</tt> (condition applied after aggregation)</td><td><tt>.GroupBy(x => ...).Where(g => ...)</tt></td><td style="color: #ff0000">silent failure, produces subquery instead of <tt>HAVING</tt> clause and returns wrong results (<a href="https://nhibernate.jira.com/browse/NH-2833">NH-2883</a>)</td><td style="color: #006600">OK</td></tr>
<tr><td><tt>ORDER BY</tt> (sorting)</td><td><tt>.GroupBy(x => ...).OrderBy(g => ...)</tt></td><td style="color: #ff0000"><tt>MismatchTreeNodeException</tt> (<a href="https://nhibernate.jira.com/browse/NH-2781">NH-2781</a>)</td><td style="color: #006600">OK</td></tr>
<tr class="odd"><td><tt>TOP / LIMIT</tt> (number of results)</td><td><tt>.GroupBy(x => ...).Take(10)</tt></td><td style="color: #ff0000"><tt>NotImplementedException</tt></td><td style="color: #006600">OK</td></tr>
<tr><td><tt>OFFSET</tt> (paging support)</td><td><tt>.GroupBy(x => ...).Skip(10)</tt></td><td style="color: #ff0000"><tt>NotImplementedException</tt></td><td style="color: #006600">OK</td></tr>
</table>
<p><b>Things look MUCH better now</b> - everything I need (and a bit more) is correctly supported by the newest LINQ provider.</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com7tag:blogger.com,1999:blog-6038034264692630628.post-50527913144123542662013-04-08T23:06:00.001+02:002013-04-08T23:09:51.459+02:00NHibernate Equals implementation with proxies vs. ReSharper - or yet another couple of hours lost<p>I'm already quite sensitive to <a href="http://stackoverflow.com/questions/5851398/nhibernate-reasons-for-overriding-equals-and-gethashcode">missing <tt>Equals</tt> and <tt>GetHashCode</tt> implementations</a> for NHibernate entities or value types, which have caused issues like re-inserting or duplicating items within collections a hundred times.</p>
<p>A good rule for <tt>Equals</tt> and <tt>GetHashCode</tt> (that is <a href="http://www.nhforge.org/doc/nh/en/#persistent-classes-equalshashcode">clearly stated in the documentation</a>) is to base the comparison, whenever possible, on the set of fields that forms the "business" (real-life) identity of an object - and not on database-level identifiers. This approach also works well when comparing objects from different sessions (detached) or not yet persisted (transient) - without any <a href="http://stackoverflow.com/questions/5686604/equals-implementation-of-nhibernate-entities-unproxy-question">"unproxying" magic</a>.</p>
<p>Today, my personal counter of hours devoted to cursing and fighting NHibernate-related corner cases and strange issues increased once again. I hit an interesting (two hours later - frustrating) case of entities being mixed up despite correct SQL queries being issued. I was quite confident that my <a href="http://www.jetbrains.com/resharper/webhelp/Code_Generation__Equality_Members.html">ReSharper-generated</a> <tt>Equals</tt> and <tt>GetHashCode</tt> methods were correct and that the root cause lay somewhere else. Just look how simple the code of my entity was:</p>
<pre class="brush: csharp">public class City
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }

    public virtual bool Equals(City other)
    {
        if (ReferenceEquals(null, other))
            return false;
        if (ReferenceEquals(this, other))
            return true;
        return Equals(other.Name, Name);
    }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(null, obj))
            return false;
        if (ReferenceEquals(this, obj))
            return true;
        if (obj.GetType() != typeof(City))
            return false;
        return Equals((City) obj);
    }

    public override int GetHashCode()
    {
        return (Name != null ? Name.GetHashCode() : 0);
    }
}</pre>
<p>I am comparing <tt>City</tt> instances using a natural key - the city name. In the faulty code I was querying the database for different entities that reference <tt>City</tt>, and later the referenced cities were compared for equality. Here is a simplified sketch of the test case (written in <a href="https://github.com/machine/machine.specifications">Machine.Specifications</a>), with initialization that creates the object graph and two fetching scenarios as separate tests.</p>
<pre class="brush: csharp">public class EqualsTest : DatabaseTests
{
    Establish context = () => sut.WithinSessionAndTransaction(sess =>
    {
        var city = new City { Name = "Llanfairpwllgwyngyll" };
        sess.Persist(city);
        sess.Persist(new Address { City = city });
        sess.Persist(new District { City = city });
    });

    It should_have_equal_cities = () => sut.WithinSessionAndTransaction(sess =>
    {
        var address = sess.Query<Address>().Single();
        var district = sess.Query<District>().Single();
        district.City.ShouldEqual(address.City);
    });

    It should_correctly_use_city_in_lookup = () => sut.WithinSessionAndTransaction(sess =>
    {
        var address = sess.Query<Address>().Single();
        var districts = sess.Query<District>().ToLookup(x => x.City);
        districts[address.City].ShouldNotBeEmpty();
    });
}</pre>
<p>In the first test I'm doing a direct comparison of the cities; in the second one I'm creating a <a href="http://msdn.microsoft.com/en-us/library/bb460184.aspx">lookup table</a> with the <tt>City</tt> instance as a key. In both cases lazy loading takes place, so I'm working with NHibernate-generated <tt>City</tt> proxies. But that should not be a problem, as <a href="http://www.nhforge.org/doc/nh/en/#architecture-states"><b>NHibernate guarantees object identity within a single session</b></a>, right?</p>
<p>Well, uhm, the first test passes as expected, but the second one fails! It turned out that the <tt>City</tt> proxy instance used as the key in <tt>districts</tt> <b>does not maintain identity</b> with the proxy instance fetched for the <tt>Address</tt> instance, even though both point at the same (single) city and are used within a single session!</p>
<p>I'm not sure why this happens and I'm pretty confident it shouldn't, but fortunately the workaround is quite easy. As the proxy instances used in the <tt>Address</tt> and <tt>District</tt> instances are different references, they are compared using the <tt>Equals</tt> method we've provided. When <tt>Equals</tt> (or, more precisely, <tt>GetHashCode</tt>) is called on one of the objects to compare it with the second one, a lazy fetch from the database is performed and the object becomes the "real", unproxied <tt>City</tt>. But the second one doesn't - it is still a <tt>CityProxy</tt> instance. And the <tt>Equals</tt> generated by ReSharper, when checking object types, unfortunately expects the types to match exactly:</p>
<pre class="brush: csharp">if (obj.GetType() != typeof(City))
    return false;</pre>
<p>But <tt>obj.GetType()</tt> returns <tt>CityProxy</tt>, so we're <b>exiting the comparison with a negative result</b> here. The workaround is to replace that exact check with a more semantic one, verifying only whether <tt>obj</tt> can be treated as a <tt>City</tt> instance:</p>
<pre class="brush: csharp">if (!(obj is City))
    return false;</pre>
<p>In this case, neither <tt>City</tt> nor <tt>CityProxy</tt> instances are rejected, so NHibernate can continue by comparing city names and see that the proxy points at the same object. With this simple change done - voilà - both tests pass!</p>
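<p>The whole effect is easy to reproduce without NHibernate at all. Here is a minimal, self-contained sketch (the <tt>FakeCityProxy</tt> class is made up, standing in for the proxy subclass NHibernate generates at runtime) showing the exact-type check rejecting the proxy while the <tt>is</tt> check accepts it:</p>
<pre class="brush: csharp">using System;

public class City
{
    public virtual string Name { get; set; }
}

// Hypothetical stand-in for the proxy subclass NHibernate generates at runtime.
public class FakeCityProxy : City
{
}

public static class ProxyTypeCheckDemo
{
    public static void Main()
    {
        City proxy = new FakeCityProxy { Name = "Llanfairpwllgwyngyll" };

        // The exact-type comparison fails for the subclass...
        Console.WriteLine(proxy.GetType() == typeof(City)); // False

        // ...while the semantic check succeeds.
        Console.WriteLine(proxy is City); // True
    }
}</pre>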
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com7tag:blogger.com,1999:blog-6038034264692630628.post-3129927216283959782013-02-18T09:30:00.000+01:002013-02-18T09:30:06.377+01:00[Obsolete] should be obsolete<p>The <a href="http://msdn.microsoft.com/en-us/library/aa664623(v=vs.71).aspx"><tt>[Obsolete]</tt> attribute</a> is a pretty useful concept in the .NET Framework, designed to mark parts of our code that are planned to be phased out and whose usage should be avoided. It makes a lot of sense to mark methods or types as obsolete, e.g. when we drop support for some parts of our API and the code is planned to be removed once all the customers are migrated, or when we've restructured our code so that there's a newer approach to a particular problem and we just had no time or possibility to upgrade all the usages, etc.</p>
<p><img border="0" src="http://1.bp.blogspot.com/-xyHcpRZFxB4/USE4t-JGMDI/AAAAAAAAC7k/tYE3SETDvok/s320/obsolete.png" style="float: right" />Recently, in the project I work on, I stumbled upon a pretty useful class - part of our project-wide infrastructure and heavily used throughout the project - marked with an <tt>[Obsolete]</tt> attribute. Unfortunately, the parameterless overload, without a message, was used. I didn't know that part of the code well, so I decided to ask the author what it means for this class to be obsolete. A quick dig through the revision history traced the attribute back to a developer no longer working on the project. Also, no one on the team could point me to an alternative approach I should use, most probably because there was none yet.</p>
<p>I lost some time trying to figure out the reason for that spooky <tt>[Obsolete]</tt> attribute, and only came to the conclusion that the developer who put it there must have been planning some redesign or was simply unhappy with the class as it was. And even if I'm wrong and there is a valid reason for not using the class in question, I had no way to know it, just because someone was too lazy to leave a simple message in the attribute. Without the message, all we get is a pointless compiler warning whenever we use that class.</p>
<p>To avoid such confusing situations and a loss of time for our successors, teammates, or even ourselves, I think we should never apply the <tt>[Obsolete]</tt> attribute using its <a href="http://msdn.microsoft.com/en-us/library/0xwcsd3h.aspx">parameterless constructor</a>. In fact, in my opinion, that constructor <b>should be marked as obsolete itself</b>.</p>
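<p>To make the difference concrete, here is a minimal sketch (the member names are made up for illustration). Note that the overload taking a second <tt>bool</tt> argument goes one step further and turns usage into a compile-time error:</p>
<pre class="brush: csharp">using System;

public static class LegacyApi
{
    // Parameterless - the caller gets a bare warning and no clue what to do.
    [Obsolete]
    public static void OldHelper() { }

    // With a message - the warning itself tells the caller how to migrate.
    [Obsolete("Use NewHelper instead; OldHelper will be removed in v3.0.")]
    public static void DocumentedOldHelper() { }

    // With error = true, calling the member becomes a compilation error.
    [Obsolete("Removed - use NewHelper.", true)]
    public static void RemovedHelper() { }

    public static void NewHelper() { }
}</pre>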
<p>We have such an easy <a href="http://msdn.microsoft.com/en-us/library/ccwf47a2.aspx">alternative</a>. Wouldn't it be nicer if I had encountered the attribute with a message provided? Like <tt>[Obsolete("Use class Foo instead.")]</tt> or <tt>[Obsolete("Support for LoremIpsum dropped as of v. 3.2")]</tt>?</p>Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com4tag:blogger.com,1999:blog-6038034264692630628.post-10110806481556281012013-02-14T09:30:00.000+01:002013-02-14T09:30:00.560+01:00NHibernate's test suite is a mess<p>I'm not doing it on purpose - I haven't turned into a sworn NHibernate foe - but it looks like this will be <a href="http://notherdev.blogspot.com/2012/09/is-nhibernate-dead.html">another rather discouraging post about NHibernate</a>. When I saw its so-called unit test suite, I couldn't resist sharing my thoughts.</p>
<p>NHibernate is using <a href="http://teamcity.codebetter.com/project.html?projectId=project7&tab=projectOverview">CodeBetter's TeamCity</a>. We can see <a href="http://garrettsmith.net/blog/archives/2007/02/ignored_tests_a.html">more than 200 tests ignored</a> there. This doesn't bode well in itself.</p>
<p>I looked at the NHibernate 3.2 source code I already had on disk, but I also compared it briefly with the current <a href="https://github.com/nhibernate/nhibernate-core">GitHub</a> version, and the situation hasn't changed drastically since then.</p>
<p>Looking at the scope of most of the tests, they are more integration than unit tests - they exercise complete top-down call paths. This means that a deep knowledge of the system is required to tell what went wrong when a test fails, and there is virtually no chance for anyone not deeply familiar with the codebase to know how to fix it.</p>
<p>There's nothing wrong with integration testing, of course, but not when the recurring pattern is that the tests are just <a href="http://en.wikipedia.org/wiki/Smoke_testing">smoke tests</a> - verifying only whether an exception was thrown or not. I ran a tool of mine that finds tests without assertions (which I'm going to publish some day) - and it turned out there are <b>328 assertionless tests</b>! More than 7% of the whole suite doesn't actually verify any outcome. It is certainly possible to remove a whole lot of code, even <a href="http://jasonrudolph.com/blog/2008/06/17/testing-anti-patterns-incidental-coverage/">theoretically test-covered code</a>, without causing a single test to fail!</p>
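<p>If the intent of such a test really is "this should not throw", it costs nothing to say so explicitly. A sketch of what I mean, in NUnit syntax - the <tt>BuildConfiguration</tt> helper is hypothetical, standing in for whatever the original smoke test exercised:</p>
<pre class="brush: csharp">using NUnit.Framework;

[TestFixture]
public class ConfigurationSmokeTests
{
    // Hypothetical stand-in for the code the original smoke test exercised.
    public static void BuildConfiguration() { }

    // The assertionless version: passes as long as nothing throws,
    // but the intent is invisible to the reader.
    [Test]
    public void Bug()
    {
        BuildConfiguration();
    }

    // The same intent, stated explicitly - and the test name says what it checks.
    [Test]
    public void Configuration_can_be_built_without_errors()
    {
        Assert.DoesNotThrow(() => BuildConfiguration());
    }
}</pre>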
<p>But way more interesting are the tests that do not increase the code coverage even incidentally! I like this one:</p>
<pre class="brush: csharp">[TestFixture]
public class Fixture : BugTestCase
{
    [Test]
    public void Bug()
    {
        // Do nothing, we only want to check the configuration
    }
}</pre>
<p>Note also the incredibly poor naming scheme. There are 36 tests named <tt>Bug</tt> and 35 named just <tt>Test</tt> in the codebase, and at least 80 test classes named <tt>Fixture</tt>. There is some kind of convention for namespaces, though, but with a lot of exceptions.</p>
<p>What else can we see in the NHibernate "unit" test suite? Well, everything! We have real file system access, real database access, time measurements, dependencies on driver configuration, etc. There is <a href="https://github.com/nhibernate/nhibernate-core/blob/master/src/NHibernate.Test/TestCase.cs">a lot of test-infrastructure code that should be unit-tested itself</a>.</p>
<p>Those tests, no matter whether we call them unit, integration, or whatever tests, are poorly factored and contain multiple asserts. Look at the <tt>MasterDetail</tt> test <a href="https://github.com/nhibernate/nhibernate-core/blob/master/src/NHibernate.Test/Legacy/MasterDetailTest.cs">here</a> - 180 lines of code and 40 asserts in a single test! The longest test class I encountered was <tt><a href="https://github.com/nhibernate/nhibernate-core/blob/master/src/NHibernate.Test/Legacy/FooBarTest.cs">FooBarTest</a></tt> - it has 5729 lines (and a pretty nice name, doesn't it?).</p>
<p>And to finish, this is the line I liked the most:</p>
<pre class="brush: csharp">catch(Exception e)
{
Assert.IsNotNull(e); //getting ride of 'e' is never used compile warning
}</pre>
<p>Wow, we have an assertion! But, as the comment suggests, it was added only to shut the compiler up, not to verify anything valuable. Not to mention that there are better ways to cope with compiler warnings...</p>
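<p>One of those better ways, when a test genuinely expects an exception, is to let the framework capture it instead of writing a <tt>try</tt>/<tt>catch</tt> with a dummy assertion. A sketch in NUnit (the test itself is made up for illustration):</p>
<pre class="brush: csharp">using System;
using NUnit.Framework;

[TestFixture]
public class ExpectedExceptionTests
{
    [Test]
    public void Parsing_garbage_should_fail()
    {
        // No catch block, no dummy assertion to silence the compiler:
        // Assert.Throws fails the test if no exception is thrown
        // and hands us the caught exception for further inspection.
        var ex = Assert.Throws<FormatException>(() => int.Parse("not a number"));
        Assert.That(ex.Message, Is.Not.Empty);
    }
}</pre>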
<p>Anyway, if you're ever looking for a good unit test suite to learn from, at least you now know where definitely not to look!</p>
Adam Barhttp://www.blogger.com/profile/16605796098913600806noreply@blogger.com11