Wednesday, 24 April 2013

How to assess user experience

Thu 7th March Tobias Komischke
Aims:
  • Suitability for task (e.g. on Amazon it takes 15 presses of the Tab key to reach the search box!)
  • Self-descriptive
  • Controllability
  • Conform to user expectations
  • Error tolerance
  • Suitability for individualisation
  • Suitability for learning
A radar chart of before/after scores against these aims can be used to prove that a UI has been improved:

Avoid:
  • Non-standard UI controls
  • Inconsistency
  • No perceived affordance
  • No feedback
  • Bad error messages
  • No default values and bad cursor focus
  • Dumping users right in (completely empty OR really full)
  • No workflow support
Other UI considerations
 
Screen clutter
  • Aesthetics / human factors - clutter limits the brain's ability to process info
Text legibility
  • Character sizes - there are well-known optimum and minimum text sizes for handhelds / monitors / projection walls
  • Hard to get right across different form factors
Text orientation
  • Don't use marquees
  • Rotated right is slightly better than rotated left
  • Horizontal is best
Colour contrast
  • Tools are available on the web
  • Colour blindness affects 8% of men and 0.43% of women, mostly red/green
  • RAG statuses: use colour plus shape and icon
Text alignment in menus
  • Left-aligned is best (for English)
  • In forms, align the label next to the control (proved using eye tracking)
  • Top-aligned labels are also OK (benefit: copes with text sizes varying after translation)
  • Bad: labels at the bottom right, or inside the input box (the info has gone once you start typing, watermarked text is hard to read, and users presume the field has already been filled in)
Visual structure and flow
  • Use a grid structure and flow
  • Improves readability and speeds up learning
  • Standardisation reduces design effort
  • Add grid lines to measure conformance
Orientation
  • Where am I?  Where have I come from?  What's next?
Icon quality
  • Concreteness = resemblance to the real-world counterpart
  • Complexity = richness of the details depicted (want low)
  • Semantic distance (want low)
Data visualisation
  • Bar charts are best
  • Volume is worst (humans can't spot area changes)
Visual attention
  • Where do people look?
  • Eye tracking is costly
  • Alternatively, saliency modelling - the website "Feng GUI" simulates where people will look first (1 free heat map per day)
Usability testing
  • Real users, real tasks
  • Prototypes and real products
  • Thinking aloud
  • Qualitative and quantitative data
  • 7-8 people will find 80% of usability problems; diminishing returns after that
  • Trend towards remote UI testing - easier and better tooling these days

Design for testing

Thu 7th March Kevin Jones
Our goal is to produce flexible, maintainable, defect-free code.
 
Code:
  • Automate the build and deploy process
  • Don't leak UI in to domain layers
  • Don't leak DB access across layers
  • Define new types for each concept even if they are small
    • e.g. Money type rather than using a double.
  • Build facades
    • Amalgamate behaviour in to a new type
    • Hide complexity of underlying objects
  • Single responsibility principle
  • Using concrete types is often a bad idea
    • Code to interfaces
    • Can replace real objects with test objects
    • Can use Dependency Injection
    • Use of 'new' means we are tied to an implementation
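The "define new types for each concept" advice can be sketched for the Money example. This is an illustrative type, not code shown in the talk; using decimal also avoids binary floating-point rounding on amounts:

```csharp
using System;

// A dedicated Money type instead of a bare double: the currency travels
// with the amount, and mixing currencies becomes a compile-visible error path.
public struct Money
{
    public readonly decimal Amount;
    public readonly string Currency;

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public Money Add(Money other)
    {
        if (other.Currency != Currency)
            throw new InvalidOperationException("Cannot add different currencies");
        return new Money(Amount + other.Amount, Currency);
    }

    public override string ToString() { return Amount + " " + Currency; }
}
```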
Test:
  • Tests should be isolated
  • Unit tests shouldn't use system (file / DB etc.)
  • Test names should be descriptive (use the BDD style)
    • Given_When_Then()
    • e.g. GivenAUserService_WhenIAskForAllUsers_ThenAllUsersAreReturned()
  • Arrange -> Act -> Assert
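The example test name above can be fleshed out into an isolated, Arrange-Act-Assert style unit test. A hedged sketch: UserService, IUserRepository and FakeUserRepository are hypothetical names, not code from the talk:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IUserRepository
{
    IEnumerable<string> GetAll();
}

public class UserService
{
    private readonly IUserRepository repository;

    // Constructor injection: the service codes to an interface,
    // so a test can hand it a fake instead of a real database.
    public UserService(IUserRepository repository) { this.repository = repository; }

    public List<string> GetAllUsers() { return repository.GetAll().ToList(); }
}

// A fake keeps the test isolated - no database or file system involved.
public class FakeUserRepository : IUserRepository
{
    public IEnumerable<string> GetAll() { return new[] { "alice", "bob" }; }
}

public class UserServiceTests
{
    public void GivenAUserService_WhenIAskForAllUsers_ThenAllUsersAreReturned()
    {
        // Arrange
        var service = new UserService(new FakeUserRepository());

        // Act
        var users = service.GetAllUsers();

        // Assert
        if (users.Count != 2) throw new Exception("Expected all users to be returned");
    }
}
```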
Dependency Injection
  • Parameter injection
  • Setter injection
  • Constructor injection
  • Use an IoC container (e.g. Unity / Castle / Ninject / AutoFac for MVC)
  • IoC (Inversion of Control) is a principle; Dependency Injection is a consequence of IoC
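The three injection styles can be sketched on one class. IClock and Report are made-up names for illustration:

```csharp
using System;

public interface IClock { DateTime Now { get; } }

public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class Report
{
    private IClock clock;

    // Constructor injection: the dependency is supplied at creation time.
    public Report(IClock clock) { this.clock = clock; }

    // Setter injection: the dependency can be swapped after creation.
    public IClock Clock { set { this.clock = value; } }

    // Parameter injection: the dependency is passed per call.
    public string Header(IClock clock) { return "Report generated " + clock.Now; }

    public string Header() { return Header(this.clock); }
}
```

Because Report never says `new SystemClock()` itself, a test (or an IoC container) decides which IClock implementation it gets.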

Tuesday, 2 April 2013

Living happily with the GC

Wed 6th March Richard Blewett

This lecture was one of the more practical ones and mainly focused on using PerfMon to show how the GC behaves when various bits of code are run.  I'm not going to give a description of how the GC works; instead I'll concentrate on coding changes we can/should be making.

The main goal is to have as much memory as possible collected in Generation 0, since Generation 1 and 2 collections are around 10 and 100 times slower respectively.

It's worth noting that as of .NET 4 you can choose background (asynchronous) garbage collection for Generation 2.

There is a stage in garbage collection called finalization, which runs in case Dispose wasn't called on an object.  It happens on the finalizer thread, which is slow and whose timing is unpredictable.  Simply adding a destructor to a C# class drops the number of objects allocated per second by about a factor of 15.  If you need to release references to unmanaged objects (e.g. icons / file streams / SQL connections), consider using SafeHandle instead.
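The SafeHandle idea can be sketched as follows. This assumes a Win32 file handle; MySafeFileHandle is a made-up name, and the base class SafeHandleZeroOrMinusOneIsInvalid supplies the critical-finalization plumbing so your own classes don't need destructors:

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Wraps a native handle; we only have to say how to release it.
public class MySafeFileHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public MySafeFileHandle() : base(true) { }   // true = we own the handle

    protected override bool ReleaseHandle()
    {
        // Called at most once, only for a valid handle.
        return CloseHandle(handle);
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr handle);
}
```

Classes holding a MySafeFileHandle then implement IDisposable and call Dispose on the handle; they no longer need a destructor of their own, so they avoid the finalizer-thread cost described above.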

This example looks strange, but it helps if a GC happens halfway through the Load call:
    
    void LoadData(string s)
    {
        // Clear the old reference first: if a collection runs while
        // XDocument.Load is executing, the previous document is no
        // longer reachable and can be reclaimed.
        this.data = null;
        this.data = XDocument.Load(s);
    }

Actions:

  • Run load tests with PerfMon garbage collections switched on.
  • Look at our use of destructors
  • Consider using SafeHandle instead of IDisposable
  • Profile to see what the current state of play is w.r.t memory usage / GC time.