A colleague and I attended the VBUG Winter Conference on Tuesday 4th November. We arrived a bit late, missed the welcome and housekeeping, and headed straight into the (already started) keynote...

Keynote: Tips and tricks for successful software teams by Roy Osherove (Roy@osherove.com)

This seemed to be a collection of thoughts, loosely tied together by a voting system - the highest-scoring topics got presented. As we weren't there for the beginning, it's a bit hard to say exactly how this worked, but there seemed to have been some sort of vote on which of a list of topics should be talked about.

First up were ways that he has found of working successfully in a team:

  • Timebox iterations
  • Use of user stories
  • Use of a whiteboard to show progress - it's big, and it's visible to all, team and visitors alike
  • Use a backlog to hold all the requirements - new requirements don't make it into the current cycle
  • Bad tests better than no tests
  • Integration tests better than no tests
A "kind-of agile" approach, not strictly adhering to any one of the true methodologies but picking the most suitable aspects from all of them, implemented little by little until he had a process that worked. All sounds sensible so far...

Next he moved on to automated build tools, mentioning a couple before focusing specifically on FinalBuilder. This is a GUI tool with visual programming. It can do remote deployments and is apparently easy to maintain. It can also be called from the command line with different parameters. His team uses it with TeamCity to manage the continuous integration side.

Next up was the advice "Learn to say no" - probably aimed more at small business owners, but relevant to everyone: don't overcommit. If you can't say no, then say "I'll get back to you", and take time to think about what is being asked of you before committing to it.

The next one was to make use of tools to add productivity - he used to work with a developer who was much more productive than the others. On investigation this came down to using tools to generate bits of code, templating and so on. In the VB6 world he used a tool called CodeSMART to produce properties, comments, etc., and Source+ 2000, a repository of reusable code components. By using tools like these, a developer can auto-generate the simple stuff and concentrate on the important, value-added work.

In a similar vein, learn how to use the environment - invest time in finding out how to use your IDE. A couple of keystrokes to do simple actions can save a lot of time in the long run.

The final one was the benefits of the stand up meeting. A simple round robin of all members of the project team, with 30 seconds per answer only - if more time is needed, then it probably should move to an outside meeting. The questions are:

  • What did you do yesterday?
  • What are you going to do today?
  • What is stopping you?

Session 1: Data-Driven AJAX in ASP.NET by Jeffrey McManus

Jeffrey is one of the developers of document-sharing site approver.com and has an employment history spanning some major companies, including the Yahoo Developer Network. His was the most interesting talk of the day, partly down to his evident interest in and passion for the subject.

His first comment was on how good it is that jQuery is going to be included in the ASP.NET stack. It is already possible to use ASP.NET AJAX with jQuery.

He reminded everyone that ASP.NET AJAX is built in to .NET 3.5 and is an optional free install for .NET 2.0.

He then gave some examples of when to use Ajax, suggesting the following scenarios:

  • Feedback on long-running tasks
  • Providing information to the user quickly, economically and in context
  • Drawing the user's attention to important events in the lifecycle of the application
He spoke about the JSON data format, mentioning that it is a community-driven standard. At a high level it stores JavaScript key/value pairs, and it is a fairly efficient format. ASP.NET AJAX consumes JSON internally but doesn't expose it. There are some free, open source .NET JSON libraries, but Jeffrey hasn't really had any experience of them.

All of his demos were written using the SQLite database. SQLite is free, open source and public domain. It needs no installation and no configuration - he described it as being like Access without the GUI. It is distributed as a small DLL - 220k for the database engine and ADO.NET provider.
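
To make the key/value idea concrete, here's a minimal sketch of a JSON round trip - in Python rather than the C# of his demos, since I can't reproduce those:

```python
import json

# A JSON document is essentially nested key/value pairs plus arrays.
customer = {"id": 42, "name": "Ada", "tags": ["vip", "uk"]}

text = json.dumps(customer)   # serialise the dict to a JSON string
restored = json.loads(text)   # parse it back into a dict

print(text)  # {"id": 42, "name": "Ada", "tags": ["vip", "uk"]}
```

The same shape - objects of key/value pairs, arrays, strings, numbers - is what the ASP.NET AJAX stack is shuttling around internally.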

Some ASP.NET AJAX controls demand SOAP, so you have to write a SOAP proxy for just about any data you wish to use - this sounds a bit faffy. Where this is the case, the control relies on the names of the SOAP method's parameters rather than the method signature (he used AutoComplete's prefixText and count parameters as his example).

He also briefly covered the alternatives to using ASP.NET AJAX.

He mentioned the OpenAjax Alliance, which is "an organization of leading vendors, open source projects, and companies using Ajax that are dedicated to the successful adoption of open and interoperable Ajax-based Web technologies. The prime objective is to accelerate customer success with Ajax by promoting a customer's ability to mix and match solutions from Ajax technology providers and by helping to drive the future of the Ajax ecosystem." The main intention is for the various vendors' Ajax stacks not to break when used with each other.

His final point was that it is possible to write an HTTPHandler to produce and consume JSON. A quick Google search turns up a JsonHandlerDotNet project.
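
The core of such a handler is just "parse JSON in, emit JSON out". A rough Python analogue of that idea (the handle function and its field names are my own illustration, not the JsonHandlerDotNet API):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle(request: dict) -> dict:
    # The "business" part: decoded JSON in, a dict to encode back out.
    return {"received": request, "ok": True}

class JsonHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume: read the request body and parse it as JSON.
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        # Produce: serialise the response dict and send it back.
        body = json.dumps(handle(request)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8080), JsonHandler).serve_forever()  # uncomment to run
```

The .NET equivalent implements IHttpHandler instead of subclassing a request handler, but the produce/consume shape is the same.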

Session 2: Functional Programming in C# 3.0 by Oliver Sturm (Oliver@sturmnet.org)

So what is functional programming?

  • A programming paradigm
  • Avoids state and mutable data
  • Well-known languages include Haskell, Lisp and F#
  • Focuses on the application of functions
  • Many imperative and OO languages have FP features
  • Functional languages have features that support higher-order functions, currying, recursion and list comprehensions
  • Any function should only work on the parameters passed in, and should not persist anything or use any other variable - anything it isn't supposed to do is called a side effect
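
That last bullet can be shown with a tiny sketch (mine, not from the talk, and in Python rather than C#) contrasting a pure function with one that has a side effect:

```python
total = 0

def add_with_side_effect(x):
    # Impure: reads and writes state outside its parameters (a side
    # effect), so the same input gives different results on each call.
    global total
    total += x
    return total

def add(x, y):
    # Pure: depends only on its parameters and persists nothing,
    # so the same inputs always give the same output.
    return x + y
```

Calling add(2, 3) returns 5 every time; calling add_with_side_effect(1) twice returns two different answers, which is exactly what makes impure code harder to test and parallelise.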
Why is functional programming interesting?
  • Promotes modularisation - reuse happens at the function level rather than the class level
  • Lazy evaluation --> greater efficiency
  • The goal of avoiding side effects has several advantages: scalability, optimisation, debugging, testing
  • By relying only on functions with no side effects you can parallelise really easily. Testing is easier too: with no external dependencies you don't need mock objects or setup/teardown scripts
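
As a sketch of the parallelisation point (again in Python rather than the C# of the talk):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Pure: no shared state, so calls can safely run in parallel.
    return n * n

numbers = list(range(8))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, numbers))

# With no side effects, the parallel result matches the sequential one.
assert results == [square(n) for n in numbers]
```

Because square touches nothing outside its own parameters, nothing needs locking and the work can be farmed out to any number of workers.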
His example code made use of the Array.ForEach method in .NET to apply functional programming in C#. Alternatively, lambda expressions can be used, which make the code look cleaner.

Examples of higher order functions:

  • Map (Select in LINQ) - do something for each element in a list (the ForEach in the example above)
  • Filter (Where in LINQ) - extract elements based on criteria
  • Reduce (Aggregate in LINQ) - summarise elements according to some calculation (e.g. sum, count)
A variation of Select in .NET is SelectMany, which allows a select to return multiple results per element rather than the usual single one.
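
For anyone who hasn't met these, a quick sketch of the three higher-order functions plus a flat-map (the rough equivalent of SelectMany) - shown in Python since I didn't keep the C# demos:

```python
from functools import reduce

# Map (Select): do something to each element.
doubled = list(map(lambda x: x * 2, [1, 2, 3]))           # [2, 4, 6]

# Filter (Where): keep elements matching a predicate.
evens = list(filter(lambda x: x % 2 == 0, range(6)))      # [0, 2, 4]

# Reduce (Aggregate): fold a list down to a single value.
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)   # 10

# Flat-map (roughly SelectMany): each element yields many results,
# flattened into one list.
orders = [[10, 20], [5], [7, 3]]
flat = [item for order in orders for item in order]       # [10, 20, 5, 7, 3]
```

The LINQ versions are the same shapes with C# lambdas: `xs.Select(x => x * 2)`, `xs.Where(...)`, `xs.Aggregate(...)` and `xs.SelectMany(...)`.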

Functional programming in C# benefits are:

  • Easy to unit test - no side effects
  • Programming for scalability is easier
  • Easy to get things done
  • BUT make sure all of the team understands it BEFORE you use it :-)

Session 3: Red, Green, then what? by Gary Short (Gary@garyshort.org)

This talk was all about refactoring - the title comes from the phrase "Red, Green, Refactor". Gary was up front about the fact that he works for Developer Express, authors of two refactoring tools, but he did a great job of staying neutral about specific tools whilst really selling the benefits of using a tool, any tool.

What is refactoring?

  • Improving code without changing overall results
  • Improves understandability
  • Usually motivated by difficulty of adding new functionality
Why refactor?
  • Software entropy - the tendency of a codebase to become chaotic over time. Examples include subversion of object behaviour, multiple child objects, and algorithms becoming more specific
He introduced a concept called design debt: under deadline pressure you are more likely to bodge a solution to get it done, and over time the bodges mount up until you finally reach a point where you can't progress without refactoring. You are then forced to refactor, and this becomes a high-risk area for the project. So, refactor more often to keep the risk to the project low.

To refactor effectively, use a tool - examples are CodeRush, Refactor! Pro and ReSharper.

So, why use a tool?

  • Reliability
  • Repeatability
  • More confidence, which means you're likely to refactor more often
Examples of refactoring made easier by using a tool:
  • Changing a method signature
  • Converting to an initializer (and decomposing one)
  • Making types implicit/explicit - use of the var keyword
An interesting question was raised: what happens if you're refactoring inside a library and you don't know who consumes it? A library provides a contract, so you can't break the interface. Instead, create a new function, mark the old one with the Obsolete attribute, and hide it from browsability using the EditorBrowsable attribute with EditorBrowsableState.Never (note that this only hides the method from projects which reference the assembly, not within the project itself).
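
The C# mechanics are those two attributes; the same "keep the old entry point working but steer callers away" idea sketched in Python (the function names here are my own invention, not from the talk):

```python
import warnings

def fetch_customer_v2(customer_id):
    # The new function that callers should migrate to.
    return {"id": customer_id}

def fetch_customer(customer_id):
    # Old entry point kept alive so the library contract isn't broken,
    # but flagged so callers are steered away (akin to C#'s [Obsolete]).
    warnings.warn(
        "fetch_customer is deprecated; use fetch_customer_v2",
        DeprecationWarning,
        stacklevel=2,
    )
    return fetch_customer_v2(customer_id)
```

Existing consumers keep working, new code gets nudged towards the replacement, and the old function can be removed once everyone has moved.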

The key to good refactoring is good testing - without unit tests it is hard to be sure you haven't broken anything.

Session 4: Go with the flow - introduction to windows workflow by Ben Lamb

I missed the very beginning of this session, so didn't fully grasp what this talk was going to give me. Ben also mistimed his talk somewhat, and so didn't get to summarise and wrap up as well as I'm sure he would have liked to. The summary for this talk was: "Changing business requirements are the bugbear of the application developer. The Windows Workflow Engine (WWF) allows business rules to be modelled in a graphical environment, possibly by a business analyst rather than the developer, making changes easier to accommodate."

He started using an example of an estate agent - they may deal with commercial properties, or residential properties, they may be part of a larger group, or chain. The workflow employed by an estate agent will probably differ between agents, or even between commercial and residential within one agency. So, if you were modelling the design for a system, you'd need to model these differences.

I have a series of notes about this talk, but none of them make a lot of sense to me, so I don't think there is much point in distributing them. It's probably better to look at the Windows Workflow Foundation site and try some of the Hands On Labs exercises.


All in all I had a good day, attending some good talks given by some excellent speakers covering an array of development topics. The event had more of a DDD day feel than something like ReMix, as it was community-led rather than vendor-led. I would recommend the event and would be interested in attending again.