Cumulative Flow Diagram from TFS Data

One of the teams I’ve worked with frequently ended a sprint with two or more user stories “almost” ready. To them, “almost” meant less than two hours of work remaining, but in reality, due to sprint planning, task breakdowns and so on, we more often than not finished those user stories on day two of the next sprint.

There were a number of reasons for this, but to aid in our investigation into why, I used a Cumulative Flow Diagram (CFD) each sprint.

There are any number of ways of creating CFDs, and some people are lucky enough to be using a tool that does it for them. Unfortunately TFS, the tool I was using, wasn’t one of them. After some searching I eventually found a great post called Cumulative Flow Diagram – How to create one in Excel 2010.

It was almost exactly what I wanted, but because it needed manual data entry, I was manually digging through TFS searching for the right info. That’s not too bad if you remember to do it every day, but I’d often be unable to because of meetings or some other commitment, let alone illness or annual leave.

What I wanted was a way to extract the information on a daily basis without me having to open a web browser.

Less Manual Approach

Part 1 – What data do I need?

This was fairly simple, as I realised I needed to know:

  • The User Story ID
  • Who the User Story is assigned to. That tells me whether it’s in “Development” (not the testers or product owner), “Testing” (one of the testers) or “Ready” (the product owner)
  • The State, which is one of “Committed”, “In Progress”, “Done”

That way, I can calculate the current status of a User Story by the logic in the following table:

State     | Assigned To   | Status
----------|---------------|------------
Committed | Empty         | To Do
Committed | A Tester      | Testing
Committed | Product Owner | Ready
Committed | Anyone Else   | In Progress
Done      | n/a           | Done
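The table’s logic can be sketched as a small function. This is just an illustration (in JavaScript for brevity; in practice the calculation happened later in Excel), and the tester and product owner names are hypothetical:

```javascript
// Hypothetical team members — substitute your own.
var testers = ["Tess Ter"];
var productOwner = "Pat Owner";

// Maps a work item's State and Assigned To fields to a CFD status,
// mirroring the table above.
function cfdStatus(state, assignedTo) {
    if (state === "Done") { return "Done"; }   // Done, n/a
    if (!assignedTo) { return "To Do"; }       // Committed, empty
    if (testers.indexOf(assignedTo) !== -1) { return "Testing"; }
    if (assignedTo === productOwner) { return "Ready"; }
    return "In Progress";                      // Committed, anyone else
}
```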

Part 2 – Write TFS Query

Getting the data out of TFS is relatively simple. In the web interface, go to Work->Queries, create a new query under “My Queries” called CFD and fill it out as below, where “Iteration Path” is your current iteration:

CFD TFS Query Configuration

It doesn’t matter how you configure the column options for this query as we’ll be extracting things via C# next.

Part 3 – Command Line Tool

Running the above query from C# is also simple. The code below leaves a lot to be desired, but it was a quick proof of concept and, as it’s been “good enough”, I’ve never tweaked it.

using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

using System;
using System.Configuration;

namespace CFDConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var tfsServer = ConfigurationManager.AppSettings["TFS_SERVER"];
            var projectName = ConfigurationManager.AppSettings["PROJECT_NAME"];
            var queryFolder = ConfigurationManager.AppSettings["QUERY_FOLDER"];
            var queryName = ConfigurationManager.AppSettings["QUERY_NAME"];

            // Connect to the work item store
            var tpc = new TfsTeamProjectCollection(new Uri(tfsServer));
            var workItemStore = (WorkItemStore)tpc.GetService(typeof(WorkItemStore));

            // Run a saved query.
            var queryRoot = workItemStore.Projects[projectName].QueryHierarchy;
            var folder = (QueryFolder)queryRoot[queryFolder];
            var query = (QueryDefinition)folder[queryName];

            var queryResults = workItemStore.Query(query.QueryText);

            for (int i = 0; i < queryResults.Count; i++)
            {
                Console.WriteLine(
                    "{0},{1},{2}",
                    queryResults[i].Id, 
                    queryResults[i].Fields["Assigned To"].Value,
                    queryResults[i].State);
            }

            Console.ReadLine();
        }
    }
}

As you can see, I’ve placed the connection details in the App.config under appSettings, but otherwise it’s very simple.
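For reference, the appSettings section looks something like the following. The keys match the code above; the values here are placeholders, so substitute your own server URL, project and query names:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Placeholder values: replace with your own TFS details -->
    <add key="TFS_SERVER" value="http://your-tfs-server:8080/tfs/DefaultCollection" />
    <add key="PROJECT_NAME" value="YourProject" />
    <add key="QUERY_FOLDER" value="My Queries" />
    <add key="QUERY_NAME" value="CFD" />
  </appSettings>
</configuration>
```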

Part 4 – Scheduled Tasks

So I don’t have to remember to run this every day, I set up a scheduled task to run every morning at 8am, executing the following command to output to a file named with the current date and time.

cfd_console.exe > %date:~10,4%_%date:~4,2%_%date:~7,2%__%time:~0,2%_%time:~3,2%_%time:~6,2%.txt
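One caveat: those %date:~n,m% substring extractions depend on the system’s date format, so they can silently produce odd filenames on differently configured machines. If that bites, a tiny Node.js script could build the same filename instead — a hypothetical sketch, assuming Node.js is installed on the box:

```javascript
// Builds a timestamped filename like "2015_06_12__08_00_00.txt",
// matching the %date%/%time% pattern used by the scheduled task.
var pad = function (n) { return (n < 10 ? "0" : "") + n; };
var now = new Date();
var filename = now.getFullYear() + "_" + pad(now.getMonth() + 1) + "_" + pad(now.getDate())
    + "__" + pad(now.getHours()) + "_" + pad(now.getMinutes()) + "_" + pad(now.getSeconds())
    + ".txt";
console.log(filename);
```

A wrapper script could then redirect the console app’s output to that filename, but the batch one-liner works fine if your date format matches it.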

This way, for whatever time period you want, you just have to use those files. No Outlook reminders or alarms!

Part 5 – Cumulative Flow Diagram from TFS Data

When you have a period of time you want to produce a CFD chart for, simply take the relevant .txt files and import the data into the Excel template from the post above.

I’m sure this could be automated also, but as it only takes a few minutes every iteration I haven’t bothered.

Summary

The above approach is still fairly manual, but it’s quicker than looking at PBIs and Tasks in TFS regularly. The code also leaves a lot to be desired, but it serves its purpose!

I could of course make this into an Excel app and possibly automate the whole thing, but for now, it’s more than good enough for my needs. One day maybe that will change, but in the meantime, feel free to take the above and adapt it to your needs.

Finally, if you’re not already creating a CFD for each sprint, I highly recommend you do. It’s a quick, easy way to show both the team and any stakeholders the work in progress and how it affects throughput.

Retrospective Idea – Question All Meetings

A colleague of mine has been commenting on the “cost” of some of the meetings we have each sprint. To be fair, a whole team sat in a room for an hour is expensive, so it got me thinking. Lean, highly influential on Agile, has the concept of cutting away everything that isn’t needed, so I wondered if our meetings were value for money and, if not, what could be cut away.

I obviously didn’t want to dictate my thoughts to the team, so I tried to come up with a retrospective technique to get the team’s thoughts. As usual, it followed the 5 steps:

Question All Meetings Retrospective

Set the Stage

I set the stage as the team investigating “do we get value for money from our regular events?”. I introduced the GIFTS acronym with respect to the daily stand-up. If you’re not familiar with this, I first read about it from Martin Fowler in his post It’s Not Just Standing Up. In short, it stands for:

  • Good start – to the day
  • Improvement – chance for the team to highlight possible improvements
  • Focus – for the team for the day ahead
  • Team – as in building
  • Status – so everyone knows what everyone else is doing

Before getting into it, to warm us up, I asked everyone to predict which meeting was going to be the worst and made a note of the results. This was mainly to get everyone to speak at least once.

Gather Data

As I’m in a big team, I split them into 3s and asked them to pick two of our regular meetings and come up with a similar acronym of what we should be getting from them and an opinion of whether we are.

Generate Insights

Each team presented their acronym and opinion of whether we get them. The whole team were then encouraged to discuss whether that was accurate.

Decide What To Do

Where there were perceived problems with the meetings, i.e. we weren’t getting the value identified in the previous stages, we talked about them and decided what we could do to change that.

Close the retrospective

I closed the retrospective by thanking everyone for their time and asking them all to provide me with some feedback on how they would rate the retrospective.

Result

The retrospective went “okay”, as it didn’t flow particularly well or feel like there was enough material for a whole retrospective. That said, we did get some useful ideas and have drastically changed some meetings, so it can’t have been totally bad.

To be honest though, that was probably due to my team being very good and effectively rescuing me from a bad retrospective, rather than my skills, but I’m happy!

Overall, I wouldn’t recommend this technique as is to a new Scrum Master or a new team, but I think there’s a good retrospective technique in here somewhere waiting to come out.

Warning

I don’t think you should drastically change or completely drop one of the regular scrum events without careful thought. And if you do make any changes, please be very careful to monitor the team’s reaction to them.

Basically, not all ideas are good ones, but have the courage to try something “out there” if the team reach a consensus on trying it.

Personal Retrospective

What went well

  • We got some good ideas on how to tweak a couple of the regular events, and these changes have improved the team’s opinion of them.

What could I have done better?

  • Asking people to make up acronyms in a short space of time didn’t work out particularly well. Maybe just the core values would be better.
  • Asking the teams to pick two meetings was a chance for them to all pick the same things. Assigning them felt wrong, so I need to think of a better way.

What should I not do again?

  • Expect all of my ideas to be amazing first time.
  • Ask the team to warm up with a prediction for the worst meeting – it was a bit naff

The importance of backlog refinement

Backlog refinement isn’t a prescribed meeting, instead the Scrum Guide states:

The Scrum Team decides how and when refinement is done.

Perhaps that’s why it’s not taken as seriously as the other scrum events, or worse, forgotten or ignored. I’ve lost count of the number of project managers (or “non-agile” people) asking “why are you spending time talking about the PBIs again?”. Maybe it’s external pressure making teams drop refinement, but either way, it’s a big mistake.

The team I’m currently working with have seen some real benefits from backlog refinement, so I thought I’d get my thoughts down for the next time someone asks “what are you doing?”.

What is Backlog Refinement?

I won’t go into depth as to what backlog refinement is, as there are numerous guides out there. Instead I’ll use the classic iceberg metaphor, i.e. the backlog has epics at the bottom, almost-ready Product Backlog Items (PBIs or user stories) in the middle and some ready PBIs at the top:

Product Backlog Iceberg Metaphor

Basically, the meeting is for the team to help the Product Owner move user stories up the Product Backlog. This raises awareness of upcoming stories and helps the Product Owner break the larger items down into user stories that the team feel they could pull into the next sprint.

As a rule (which isn’t always possible, for whatever reason), I like my teams to have a “buffer” of at least 2 sprints’ worth of Ready PBIs. Any less makes the next sprint planning problematic; any more and I find priorities change and/or the team forget the details.

How long is Backlog Refinement?

Again, there isn’t a hard and fast rule for how long this event should be. The Scrum Guide says (emphasis mine):

Refinement usually consumes no more than 10% of the capacity of the Development Team. However, Product Backlog items can be updated at any time by the Product Owner or at the Product Owner’s discretion.

As with everything scrum, it’s about finding a number that is just enough for your team to get the most amount of benefit and no more.

When should we do Backlog Refinement?

The current team I’m working with have tried lots of things. We currently have 2 regular events:

  1. An hour every week where the whole team sits down together and the Product Owner guides us through the backlog.
  2. Each team member is encouraged to spend at least 30 minutes each week going through the backlog on their own. (This started off as a meeting set in their calendars but we couldn’t find a time that suited everyone, so the team are trusted to do this whenever it’s convenient).

As long as it gets done, and the team are seeing the benefits, I prefer to let the team decide.

The importance of backlog refinement

There are many more reasons to perform backlog refinement than I’m going to list, but for me, the main benefits are:

  • Increased awareness of upcoming work. This leads to greater team buy-in and morale benefits
  • Smoother planning meetings (important if you have stakeholders attending)
  • Better understanding. The team can plan better, leading to less conflicts and an increased velocity
  • Fewer mid-sprint surprises like “we didn’t realise it was this complicated”.

I’ve recently read that developers shouldn’t “measure twice, cut once”, and I think for the really excellent teams who know their code base inside out this may hold true. But for the rest of us, a little preparation can go a long, long way.

Retrospective Experience – Asking Questions

We’re close to a big release and things have been going well – think a 200%+ increase in velocity from when we started the release (and we were good already) – but our last couple of retrospectives felt a little flat, possibly because the big wins have already been made. I wanted a technique to really get the team involved.

Back to Basics

In my research, I went back to the old favourite Getting Value out of Agile Retrospectives, written by Luis Gonçalves and Ben Linders.

“Asking Questions” is aimed at new scrum masters who have never facilitated a retrospective before, so I decided to tweak the format a little.

Tweak 1

Rather than asking the questions myself, I printed them out, cut them up and put them all in a pile in the middle of the table. Importantly, the questions were all folded up so you couldn’t tell which piece of paper held which question.

Tweak 2

Rather than the Scrum Master asking the questions, the team members took turns to pick a question from the pile, read it out to the team and then give an answer. The rest of the team were then encouraged to discuss the response.

I’m pleased to say this led to a lot more group involvement than the last couple of retrospectives, and we managed to find a couple of tweaks as well as do a bit of forward planning for the next release.

Summary

The book says this is a great technique to use if you’ve never facilitated a retrospective before. The tweaks I made may or may not help this, but it certainly got the team involved and resulted in some great actions.

So another fantastic technique that I would recommend to a newbie or expert Scrum Master, especially if they want fantastic team engagement.

I’d be really interested to hear anyone trying this technique in the comments below or catch me on twitter if you prefer.

Personal Retrospective

What went well

  • Adapting the “beginners” technique to increase team involvement
  • Sitting with the team, rather than standing at the front or walking around the room seemed to increase team participation

What could I have done better?

  • Changed the wording of the questions from “what could you” to “what could we”

What should I not do again?

  • Forget to spell-check the questions

SharePoint – Requested registry access is not allowed

It’s a rare occurrence nowadays, but I spent some time playing with SharePoint – only 2010 – and I came across the following error. A colleague showed me a really useful tip which I’m going to try really hard to remember, as I think the approach is applicable to all coding.

Very simply, I couldn’t access the User Profile Service Application page and looking through the ULS logs I found this stacktrace:

Application error when access /_layouts/15/ManageUserProfileServiceApplication.aspx, Error=Requested registry access is not allowed.  
 at Microsoft.Win32.RegistryKey.OpenSubKey(String name, Boolean writable)    
 at Microsoft.Office.Server.Utilities.SetupUtility.get_InstallLocation()    
 at Microsoft.Office.Server.Administration.UserProfileApplication.DumpSynchronizationStatusToFile()    
 at Microsoft.SharePoint.Portal.WebControls.UserProfileServiceImportStatisticsWebPart._StopProfileSynchronizationJS()    
 at Microsoft.SharePoint.Portal.WebControls.UserProfileServiceImportStatisticsWebPart.OnInit(Object sender, EventArgs e)    
 at System.Web.UI.Control.InitRecursive(Control namingContainer)    
 at System.Web.UI.Control.InitRecursive(Control namingContainer)    
 at System.Web.UI.Control.InitRecursive(Control namingContainer)    
 at System.Web.UI.Control.InitRecursive(Control namingContainer)    
 at System.Web.UI.Control.InitRecursive(Control namingContainer)    
 at System.Web.UI.Control.InitRecursive(Control namingContainer)    
 at System.Web.UI.Control.InitRecursive(Control namingContainer)    
 at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)    
 at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)    
 at System.Web.UI.Page.ProcessRequest()    
 at System.Web.UI.Page.ProcessRequest(HttpContext context)    
 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()    
 at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

I did the usual googling and found a couple of suggestions, but nothing seemed to work. My colleague then suggested looking through the actual code and trying to figure out what’s happening. So I fired up ILSpy (or use your tool of choice) and eventually found this in Microsoft.Office.Server.Utilities.SetupUtility.get_InstallLocation():

using (Microsoft.Win32.RegistryKey registryKey = 
	Microsoft.Win32.Registry.LocalMachine.OpenSubKey("SOFTWARE\\Microsoft\\Office Server\\15.0"))
{
	if (registryKey != null)
	{
		result = (registryKey.GetValue("InstallPath") as string);
	}
}

So I fired up regedit and found “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\15.0”. It turns out the key was there, but its permissions were set up differently to the rest of the nodes. No idea why, but as this was my development machine, I (rightly or wrongly) just gave “EVERYONE” read access to the node via “right-click->Permissions…” and it worked.

Change regedit permissions

Summary

Using ILSpy to dig through code you don’t have the source for is fantastic. It clearly worked in this case, and I’m betting it works in a lot of other situations. It’s something I will definitely try to remember for the future.

Applying Scrum to self-education

I’ve been reading a lot in the last couple of months about self-education and in particular how to overcome procrastination. I realised I was making great plans to “learn X” or “build Y”, spending ages planning it all out but not following through when it came time to actually do it. Sound familiar?

Research

One of the first things I read about was what Scott Hanselman calls Analysis Paralysis, or basically overthinking. That label can be applied to the “build Y” things, but surely it can’t be the reason for failing to “learn X”?

I kept reading and found a couple of other ideas, but even when I followed the advice, I couldn’t maintain any momentum I gained. That was until I read “You Are Not So Smart” by David McRaney, and in particular his piece on procrastination. I recommend you take the time to read that link, but for me it boiled down to taking the decision making away from “future me”, so I can’t say “I’ll do it tomorrow”.

The trick is to accept the now you will not be the person facing those choices, it will be the future you – a person who can’t be trusted.

Extra Complication

Like most techie people, the list of interesting things I want to look into is huge. Lists upon lists upon lists. Some people say concentrate on one thing at a time; some say learn several things over the same period. I’m not going to add fuel to that debate, but I prefer a blend of the two, i.e. focus on one topic, but not for too long.

How?

I thought about this for a while and realised it was all about setting a sustainable pace. There’s no point in “now me” over committing “future me” and causing burn-out. Does that sound agile to anyone else?

Applying Scrum to self-education

I currently “practice what I preach” by doing 2-week sprints, where each sprint alternates between my two main topics of focus. Each topic has a backlog, and at the start of each sprint I commit to what I think I can do. I’m not holding daily stand-ups with myself, but I certainly run planning sessions and retrospectives.

To make it accessible, I use Trello to manage all of this. So far it’s been fantastic, and I’ve found I’m able to concentrate on one area of focus at a time, rather than doing lots of little bits of everything and not really making any progress.

I’d love to hear from anyone doing similar stuff in the comments below, or catch me on twitter.

Is the Scrum Master a disruptive role?

While talking to one of my team during our regular chats, the number of changes I’ve challenged the team with came up. I was glad when he said he could see the value in them, but was surprised when he said he initially saw them as a disruption.

The Scrum Master role is often described as “protecting the team”, i.e. preventing disruptive outside influences from causing internal disruption. So I was initially worried, as that’s the opposite of what I’m supposed to do!

Before I did anything, I was wise enough to check with the other members to get their take on it. My favourite answer was (emphasis mine):

“your job is to push the team and challenge them to do better”

I can see how that could be taken as disruptive, so I did some digging. I couldn’t find much reference to the Scrum Master being a disruption until I came across this presentation: http://www.scruminc.com/the-power-of-disruptive-leadership/. It’s quite hard to get the meaning of a presentation from just the slides, but I’m taking comfort in the title.

Personal Conclusion

After a fair bit of thinking, I’ve come to the conclusion that “disruption” is another tool in my belt. As long as I limit its use to when I think the team are stagnating, treading water or could simply do something better, it could be a very useful way to ultimately increase velocity. Importantly, as long as I don’t expect all of my suggestions to be acted upon and I’m more than willing to be told I’m wrong, I see no problem with it.

That’s my conclusion, but I’d be really interested to hear other opinions in the comments below or catch me on twitter if you prefer.

Retrospective Experience – Happiness Index using a Niko Niko Calendar

In the last month or so, I’ve been trying to change tactics for how I approach increasing velocity. I think this all started with a tweet from Luis Gonçalves:

As I’m a coder by trade, I’d been challenging the team to improve our engineering processes, as that was what I was most comfortable with. The tweet made me realise that I should put a lot more effort into the team themselves.

I’d previously read about the Happiness Index technique but was reluctant to try it, as I was worried the team would say “I don’t remember what I was feeling”. As normally happens, this all slipped my mind for a while, until I came across something called a Niko Niko Calendar. I won’t go into the details of how these two techniques work, as the above two links are great, but the following quote leapt out:

Feelings are the fastest feedback I know

I don’t think all of the team were particularly comfortable with analysing their feelings, but I believe it was a very worthwhile exercise to do. At the end of the sprint we ended up with the following chart:

Niko Niko Calendar

As a team, we then talked through all the events that caused big changes in happiness and plotted the “average” happiness on a graph.

Happiness Index Graph

This really helped the team focus on the major events and led to some really great discussions. One of our team even said afterwards “that may have been the best [retrospective] yet”.

Summary

Using a Niko Niko Calendar to capture the team’s emotional state at the end of each day is brilliant. The information is clear for everyone to see, which allows other team members to offer help if someone is clearly struggling. As a fellow Scrum Master commented:

A fantastic information radiator

I would highly recommend any Scrum Master to try it out. If you do, I’d love to hear from you in the comments below or catch me on twitter.

Personal Retrospective

What went well

  • Changing the focus from engineering to people was fantastic

What could I have done better?

  • Considered the team as well as engineering a lot earlier, but at least I’ve started!

What should I not do again?

  • I tried to restrict the team to just :) , 😐 and :( in the hope that it would make analysis easier. I think that was a mistake – the team ignored the restriction anyway, and the wider range of faces didn’t affect the analysis at all!

Add Grunt and ESLint to a MVC Project

This is part two of getting started with ESLint using Grunt, where I will show you how to configure ESLint to analyse an ASP.NET MVC project. In part one I set up our environment with Node.js, the Grunt CLI and finally Grunt for our project, but you couldn’t do much with it.

In this post, I’ll install ESLint, disable all the default ESLint rules, enable one specific rule and exclude some files from analysis.

To recap, I have a new ASP.NET MVC project in c:\myproject\WebApplication1 that also contains a package.json and Gruntfile.js:

C:\myproject\WebApplication1> dir -name
node_modules
packages
WebApplication1
Gruntfile.js
package.json
WebApplication1.sln
C:\myproject\WebApplication1>

Let’s get started.

Step 1 – Install ESLint and Grunt-ESLint

Like last time, npm makes installing things trivial. First, install ESLint:

npm install --save-dev eslint

Once that’s completed, install the ESLint grunt integration:

npm install --save-dev grunt-eslint

And finally install load-grunt-tasks, which saves a bit of typing in a minute:

npm install --save-dev load-grunt-tasks

Step 2 – Configure ESLint

eslint.json

To make it easier to change the configuration of ESLint, we’re going to use an eslint.json file. As you can probably tell from the name, it’s a text file containing some JSON that ESLint parses. The ESLint documentation is pretty good at explaining what all the options are, so I won’t do that here; for now just create one containing the following:

{
    "env": {
        "browser": true
    },
    "globals": {
        "$": true
    },
    "rules": {
        "no-undef": 1
    }
}

This ensures the browser and jQuery ($) globals are recognised by ESLint so they don’t throw false positives. It also enables a single rule: “no-undef – disallow use of undeclared variables unless mentioned in a /*global */ block”.
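To see what that buys you, consider a hypothetical myapp.js like the one below. With no-undef enabled, misspelling a variable produces a lint error like “totl is not defined” at build time, rather than a bug that only surfaces at runtime:

```javascript
// myapp.js (hypothetical) — everything here is declared, so no-undef stays quiet.
var total = 0;
var prices = [1, 2, 3];
for (var i = 0; i < prices.length; i++) {
    total += prices[i]; // misspelling "total" as "totl" here would be flagged by no-undef
}
console.log(total);
```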

As you will see in a minute, I personally like to disable all the rules, only enabling the ones I explicitly want to use. That’s personal preference, as on legacy systems you can otherwise end up with a lot of issues to address, which can seem overwhelming.

.eslintignore

The next file that we need to create is .eslintignore. As the name suggests, this is an easy way of telling ESLint to ignore certain files and directories. Again I refer you to the documentation for more details, but for now, create an .eslintignore file containing:

# ignore everything in the packages folders
**/packages

# ignore everything in Scripts except files beginning with "myapp"
**/Scripts
!**/Scripts/myapp*

This tells ESLint to ignore all files inside the packages directory, i.e. anything you’ve got from NuGet. The last two lines ignore everything in the Scripts folder except files following your application’s naming convention – you have a naming convention, right? – so library files like jquery-1.10.2.min.js are skipped.

Finally, all that’s left is to configure Grunt to run ESLint.

Step 3 – Configure Grunt to use ESLint

Before explaining the syntax, please edit your Gruntfile.js file to contain:

module.exports = function(grunt) {
    // section 1 - require modules
    require('load-grunt-tasks')(grunt);

    // section 2 - configure grunt
    grunt.initConfig({
        eslint: {
            options: {
                config: 'eslint.json',
                reset: true
            },
            target: ['WebApplication1/**/*.js']
        }
    });

    // section 3 - register grunt tasks
    grunt.registerTask('default', ['eslint']);
};

The more you play with Grunt the more familiar this will become, but it’s basically made up of 3 sections. Section 1 lists any required modules (the “require” calls), section 2 is where you initialise Grunt’s configuration and section 3 is where you register tasks.

In this instance, I’m configuring a single task called “eslint”, telling it to use the eslint.json file, turn off all the default rules (reset: true) and analyse every JavaScript file matched by the “target” pattern.

Finally, I register the “eslint” task as the default task. This simply means I can execute “grunt” instead of “grunt eslint”.

And if I do that, I get:

C:\myproject\WebApplication1> grunt
Running "eslint:target" (eslint) task

WebApplication1/Scripts/_references.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/bootstrap.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/bootstrap.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery-1.10.2.intellisense.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery-1.10.2.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery-1.10.2.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate-vsdoc.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate.unobtrusive.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate.unobtrusive.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/modernizr-2.6.2.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/respond.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/respond.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

✖ 14 problems (0 errors, 14 warnings)

And that's it! ESLint is now analysing the JavaScript files in my MVC project.

Step 4 – Next Steps

If you’ve got this far, you’re set to go. You will definitely want to edit the rules you’re using, but I’ll leave that up to you.

Please leave a comment below or catch me on twitter if you’re having any problems.

Getting Started with ESLint using Grunt

I recently had the chance to add ESLint to our workflow. I considered using it standalone, but as Grunt is becoming a first class citizen in Visual Studio 2015, I wanted to get more familiar with it now.

This is the first of a two-part guide to getting started with ESLint using Grunt. I’ll also help you understand how to enable other ESLint rules as well as include/exclude files from analysis. I’m going to assume you have little to no experience of Node.js, Grunt or ESLint and are running Windows. In this post I’ll cover setting up your environment, and next time we’ll go about installing and configuring ESLint to work on an ASP.NET MVC project. This guide could also be used to add ESLint to any project that contains JavaScript, or even on a different operating system, but I haven’t tested that, so won’t make any promises.

Once we’ve finished, you should be able to type “grunt” from a command prompt and ESLint will analyse your JavaScript, so let’s get started.

Step 1 – Install Node.js

Installing Node.js, which includes npm (the Node package manager – think NuGet for Node), is as simple as going to the Node.js homepage, clicking the “Install” button and executing the .msi file that downloads.

Once the install has finished, open a command prompt and type “npm”. If you’re presented with something like this, you’re good to go. If you have any problems, let me know in the comments.

Usage: npm <command>

where <command> is one of:
    add-user, adduser, apihelp, author, bin, bugs, c, cache,
    completion, config, ddp, dedupe, deprecate, docs, edit,
    explore, faq, find, find-dupes, get, help, help-search,
    home, i, info, init, install, isntall, issues, la, link,
    list, ll, ln, login, ls, outdated, owner, pack, prefix,
    prune, publish, r, rb, rebuild, remove, repo, restart, rm,
    root, run-script, s, se, search, set, show, shrinkwrap,
    star, stars, start, stop, submodule, t, tag, test, tst, un,
    uninstall, unlink, unpublish, unstar, up, update, v,
    version, view, whoami

npm <cmd> -h     quick help on <cmd>
npm -l           display full usage info
npm faq          commonly asked questions
npm help <term>  search for help on <term>
npm help npm     involved overview

Specify configs in the ini-formatted file:
    C:\Users\Matthew\.npmrc
or on the command line via: npm <command> --key value
Config info can be viewed via: npm help config

Step 2 – Install the Grunt Command Line tools (grunt-cli) globally

The next step is to install the Grunt Command Line tools globally using npm. Thankfully that’s as simple as typing the following in a command prompt:

npm install -g grunt-cli

Step 3 – Prepare your project for Grunt

For Grunt to run on your project, you need a “package.json” file and a “Gruntfile.js” file.

package.json

You could create this file by hand, but npm can talk you through the process, so in the root of your project type:

PS C:\myproject> npm init

You will be asked a series of questions; if you don’t know the answer to one, just hit enter to skip it. The file is editable, so it can be changed later if need be. Depending on your answers, you’ll end up with a package.json file containing something like:

{
  "name": "npminit",
  "version": "1.0.0",
  "description": "Description",
  "main": "index.js",
  "scripts": {
    "test": "test"
  },
  "author": "Matt Dufeu",
  "license": "ISC"
}

This file will also be automatically updated by npm when we start installing other packages, as we’ll see in step 4.

Gruntfile.js

Gruntfile.js is similar to a makefile (showing my age) and is used by Grunt to determine what to do when you issue grunt commands. To get started, create a file that contains:

module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json')
  });

};

Don’t worry about what this means at the moment; we’ll be modifying it later.

Step 4 – Install Grunt locally

The final setup step is to install Grunt locally. Again, npm comes to the rescue, but this time we specify “--save-dev”.

PS C:\myproject> npm install grunt --save-dev

This will do two things. Firstly, it will create a folder called “node_modules” in your project. This is the folder where npm stores all the packages for this project, so think of it as a libraries folder.

Secondly, it will install Grunt as a dependency of your project. Take a look at your package.json now and you will see a new section like this:

...
"devDependencies": {
  "grunt": "^0.4.5"
}
...

“devDependencies” simply lists the packages the project depends upon and their versions. It means you can move the project to a different location without the node_modules folder and type “npm install” to have npm download the required dependencies.

Step 5 – Verify it’s working

To check everything is working as expected, simply execute grunt and you should see the output below. It’s basically saying “there’s nothing to do”, but at this stage that’s expected.

PS C:\myproject> grunt
Warning: Task "default" not found. Use --force to continue.

Aborted due to warnings.
PS C:\myproject>

Summary

We’ve installed node.js, npm and the Grunt command line tools, and set up our package.json and Gruntfile.js files. We’re now ready to start adding ESLint to our project, so next time I’ll add ESLint, show you how to enable/disable ESLint rules and, finally, how to exclude certain files from analysis.

Please leave a comment below or catch me on Twitter if you’re having any problems.

[Update: part 2 is now available.]