How to Quickly Analyse a Project with NDepend

Whenever I take over a legacy project, the first thing I try to do is see what I’m getting myself into this time. 🙂

It’s been a long journey for me in the world of bits and pixels, so I have my own scales and “Reserved Words”, like Thom Holwerda here;


While that may be enough for you and your buddy sitting next to you analysing the same code, it is not enough for professional communication.

The report below is easier to understand. Over the years, this graph has got me a lot of refactoring / rewriting budget.

As the title hints, this is one of NDepend‘s graphs, and believe me, it looks high level and quite simple, but it is a life saver. The 2019 version adds really nice features and works quite fast with large solutions.

I tried it with a quite big, 10+ year old solution that has millions of lines of code and 700 projects. Full analysis took 3-4 minutes with 16GB RAM and a 450 MB/s SSD, and the good thing is you don’t have to load anything afterwards: once the analysis finishes, it’s really fast. I didn’t have much time to play with the console application, but at this speed it seems I could use it in my CI after every commit to see how the quality is affected.

So let’s talk about some tips and tricks about doing an overview analysis and how to plan ahead.


NDepend is paid software. You can check the prices here.

But it has a 14-day trial period with full features. That is enough to check if it works for you. You can download NDepend here.

I used DNN – DotNetNuke code as an example to provide these analyses. You can check and download the open source project here.

The download procedure is straightforward: NDepend comes as a small zip file. You don’t even need to install anything. Just extract the zip to a folder and start using it by clicking on the VisualNDepend.exe file.

It has a VS extension, but I personally try to keep my VS as clean as possible, so I haven’t tried that one.


When you click the executable the main window looks like this.

If you create a project, you can continuously check the differences with each iteration. If you don’t, you can just do a one-off analysis. This looks trivial, but look closer: you even have an option to compare two versions of a code base. So there are a lot of jewels in this tool to discover, depending on your workflow.

After selecting One-Off Analysis from the left, you select the solution .sln file or .csproj file to analyse. This brings up a window with all the assemblies in that solution.

I selected All here, to see what we’ve got. If you select Build Report, it will create a web folder with a large report. You can share it, zip it etc.

The analysis is really fast; this time NDepend nailed it. Even large projects take less than 5 minutes to analyse.

You get a web report like this. But I always start my analyses with the Desktop Application. You can get the same interactive report in the app.


Then, on the desktop app, when you go to the Dashboard you get this screen. Yes, it’s a little bit intimidating, but it is an overall view with some statistics.

On the left you see your class browser. It is not relevant on this dashboard screen; it’s just a part of the GUI.

Checking the Dashboard, you’ll see that it basically scores your Technical Debt and gives you a rating. It even estimates the man/days needed to get to the next score level. I have never used this feature myself, but maybe it can be used to compare efforts and pick the fastest and most effective option.

Remember, this is a one-time analysis, so the graphs on the right and at the bottom are not relevant for this example. The project was created in the temp folder and will be discarded later. If you create a project, save it, and reuse it to analyse your solution periodically, then these graphs start to make sense.

For my quick analysis, on this screen I only check Quality Gates, Rules and Issues briefly, and then I go to the details directly from the tabs at the bottom of the dashboard.


The first thing I check is the Metrics screen. I believe this gives me the most insight. Here is how to analyse it;

  • First, check how the image is separated into squares with grey borders. Some of them are large, some of them are small. This shows how much code you have in each project. You can tell them apart easily because they are titled with the project name. Let’s call them “Project Squares”.
  • The next thing is the gradient-colored small squares inside these “Project Squares”. These represent whatever you select from the combo box at the top of the screen labelled “Size”. You can select;
    • # IL instructions
    • Cyclomatic Complexity (CC)
    • IL Cyclomatic Complexity (ILCC)
    • IL Nesting Depth
    • # Methods
    • # Overloads
  • For a short analysis I always check Lines of Code and Cyclomatic Complexity. For me these are the best filters for an overview of the solution.
  • You can also select how deep to filter from the Level combo box above. The default is Method, but you can select namespace or assembly to have an overall view. I leave it as it is because we already have “Project Squares”.
  • If a method has a lot of complexity, its color will be red. You can adjust the intensity against the selected filter on the right. This is actually nice, because maybe you are only interested in fixing the most complex code. Just increase the intensity of the red color to your liking; the squares will change color and you will have fewer targets. Now you know where to refactor, where to attack first. Within a few minutes, you have a plan to increase code quality.

So we talked about Cyclomatic Complexity and Lines of Code. I know you are thinking LOC does not mean anything, but remember we are doing a preliminary analysis of a DotNet project we have never seen before. I found LOC helpful for tackling overly large assemblies. If you are constantly changing and deploying things in such a project, you might want to consider separating large monolith projects into many small NuGet packages, which can be developed, tested and deployed separately.

Cyclomatic Complexity is a way to quantitatively measure how complex the selected part of your code is. It is mostly used for a method or a class. This topic goes really deep; for a better understanding you should google it. But basically, if you have too many “if / switch” statements, “for loops”, method calls etc. (too many flow-control elements) in a function, it becomes hard to read, hard to maintain, hard to reuse and hard to test. You need to fix these things to improve your code quality, the stability of your system, and also the sanity and productivity of your developers.
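To make the metric concrete, here is a small, hypothetical C# method (not from DNN, purely an illustration) with its decision points counted in the comments:

```csharp
using System;

public static class Example
{
    // Cyclomatic Complexity = number of decision points + 1.
    // This method has 4 decision points (if, for, case 0, case 1),
    // so its CC is 5: you need at least 5 tests to cover every path.
    public static string Describe(int[] values, int mode)
    {
        if (values.Length == 0)                  // decision point 1
            return "empty";

        int total = 0;
        for (int i = 0; i < values.Length; i++)  // decision point 2
            total += values[i];

        switch (mode)
        {
            case 0: return "sum=" + total;                    // decision point 3
            case 1: return "avg=" + (total / values.Length);  // decision point 4
            default: return "unknown mode";
        }
    }
}
```

A method like this is still fine; the red squares on the Metrics view are methods where this count runs into the tens.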

I always believe in this quote:

“To go faster sometimes you need to slow down first”

We used this motto in many projects and it was worth the investment.

After playing with this report and sorting things out here, you now have an understanding of the solution at hand and you know where to target first.

If you click the small squares here, you will be directed into Visual Studio, right to that function. It’s quite nice to have this functionality: you are connected to your code and it’s a full loop. Just check a few methods and see the code quality, coding standards, magic numbers in the code etc.

Dependency Matrix

As the next step, I check the Dependency Matrix tab.

This is a representation of namespaces in columns and rows. You can drill these names down to class-member detail. Here you will see how many dependencies a namespace/class/member has. You can check which modules are the most used, what your core functionality is, and how many references it has.

One of the worst things you might find is your backend code referring to your UI code. You can detect those kinds of dependency issues easily on this report.

Even worse, you can have circular dependencies;

A circular dependency means that these are no longer two independent projects (because it is impossible to build only one of them). You need to either refactor so that you have only a one-way dependency, or merge them into a single project.

I also found another use for this report: when you decide to move things around, e.g. remove a 3rd-party assembly from your solution, it helps you get an overview of the dependencies and the work to be done. So you can use it in your routine refactoring cycle to quickly analyse the effects.

Rules and Quality Gates

There is a section at the bottom of the screen for the Project Rules.

You can use the bottom part to filter rules like “Avoid methods too big, too complex” or “Avoid types too big” and see the details on the left. You can click to go to the code, or right-click and check these classes in the dependency matrix if you want to continue visually. NDepend even has a query section where you can use LINQ to filter your rules. You can check the official documentation here. It’s fantastic, but I never needed it, because once I get past a certain level of quality and the code is maintainable, I rewrite that section of the solution with TDD, so it already uses all the best practices.
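For a flavour of that query section: a CQLinq rule is just a LINQ query over the code model. This sketch is adapted from the kind of rules NDepend ships with (check the CQLinq documentation for exact property names); it flags big, complex methods:

```csharp
// CQLinq sketch: this runs inside NDepend's query editor, not in a normal C# project
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30 &&
      m.CyclomaticComplexity > 20
orderby m.CyclomaticComplexity descending
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }
```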

Let’s go back to the topic. I found an easier way to analyse the solution quickly;

If you go back to the HTML report, you will see a section named “Rules summary”. It shows all the violated rules; click a title to see the details.

Also there is another section called “Quality Gates summary” in the same report;

This one shows the “Critical Rules Violated” and “New Blocker / Critical / High Issues”. When you click one of these, you go to a more detailed report page;

So now you have all the information you need about the quality of your project. You have an overview of where things might go wrong, and you even have some SMART actions to take. If you’re used to working with NDepend like I am, it only takes 30 minutes, and for me it saves hours.

There is more to it: you can feed your tests and test coverage reports into NDepend and get more correlated information about the parts that require testing, their quality, dependencies etc. in one report.

You can check all the NDepend walkthrough videos here. There is a lot more to discover about this tool.

Note to myself: Learn to use the NDepend CLI and try to integrate it with my Continuous Integration.
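For reference, the console analyser ships in the same zip as VisualNDepend.exe. A CI step could look roughly like this sketch (the install path and project file name are made up, and the exact quality-gate exit-code behaviour should be checked against the NDepend documentation):

```bat
REM Hypothetical CI step: analyse an existing NDepend project file
"C:\Tools\NDepend\NDepend.Console.exe" "C:\Builds\MySolution.ndproj"

REM Failures are reported through the exit code, so the build can react to them
IF ERRORLEVEL 1 ECHO NDepend reported failures
```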


This is how I quickly analyse a project when I take over. I do the same thing periodically and compare the results to see the quality improvements.

What I like about NDepend is;

  • It’s visual; I can share my findings with non-technical people
  • The latest version is blazingly fast and works great with the largest projects
  • Closed loop: you can browse your code and see details while analysing it
  • It comes as a small package; you don’t even need to install it
  • Integrated with test coverage results
  • Integrated with VS 2019
  • These guys have been there from the very beginning of DotNet with a great tool, and I support them

Happy coding all 🙂

Writing Testable Code in C# Chapter 1 – Constructors

C# has many toys when it comes to constructors. Classes can have several types of constructors;

  • Default constructor
  • Parameterized constructor
  • Private constructor
  • Static constructor
  • Copy constructor

This topic is not about getting to know C# constructors, but about how to use them to write testable code. If you don’t know these constructor types, now is a good time to go and check them.

Basically, a C# class has a default constructor generated by the compiler if you don’t provide one. You can have many parameterized constructors, like function overloads, and a parameterless constructor in the same class. Copy constructors take an instance of the same type as a constructor parameter. Private constructors exist only in classes that you cannot instantiate from outside. Static constructors are the only interesting ones of the lot: they don’t take access modifiers, there can be only one static constructor per class, they can’t have parameters, and they can coexist with default constructors.
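As a quick refresher, here is a hypothetical class (made up for illustration) showing these constructor kinds side by side:

```csharp
using System;

public class Sample
{
    private static int _instances;

    // Static constructor: no access modifier, no parameters,
    // at most one per class, runs once before the type is first used.
    static Sample()
    {
        _instances = 0;
    }

    // Default (parameterless) constructor.
    public Sample()
    {
        _instances++;
    }

    // Parameterized constructor, chaining to the default one.
    public Sample(string name) : this()
    {
        Name = name;
    }

    // Copy constructor: takes another instance of the same type.
    public Sample(Sample other) : this(other.Name)
    {
    }

    // A private constructor would block creating instances from outside:
    // private Sample(int secret) { }

    public string Name { get; set; }

    public static int Instances { get { return _instances; } }
}
```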

I can hear you saying, “I thought we were testing specs and functionality, who cares about constructors?” You are right. When we do unit testing, we use a fundamental pattern called AAA;

  • Arrange
    In this step, we set up our test environment: maybe creating some variables, some classes, and if necessary some mock objects.
  • Act
    In this step, we execute the test, most of the time by calling a function.
  • Assert
    In this step, we check whether the output, or the state change caused by the Act, satisfies our expectations.
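The three steps above map directly onto the shape of a test. A minimal NUnit sketch (StringCalculator is a hypothetical class, not one of the examples below):

```csharp
using NUnit.Framework;

public class StringCalculator
{
    public int Add(string numbers)
    {
        int sum = 0;
        foreach (var part in numbers.Split(','))
            sum += int.Parse(part);
        return sum;
    }
}

[TestFixture]
public class StringCalculatorTests
{
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        // Arrange: set up the object under test
        var calculator = new StringCalculator();

        // Act: execute the behaviour we are testing
        int result = calculator.Add("1,2");

        // Assert: check the outcome against our expectation
        Assert.That(result, Is.EqualTo(3));
    }
}
```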

So the Arrange step is directly relevant to constructors, because we are creating instances and passing constructor parameters. Acknowledging this, plus some simple rules, can help you write testable code. Even if you are planning to add tests later, your design and implementation should allow testing the behaviour easily. Here are the patterns you should avoid;


Using “new” Keyword In Your Constructors


Let’s start with the most common and easiest-to-solve code smell;

We have a PathFinder class that finds texts / paths in a large XML. We decided to put the instantiation of the XMLReader into PathFinder. We want to test the Find logic.

As you see below we can test the Find logic perfectly. Everything is great.
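Since the original screenshot is not reproduced here, a sketch of what that code looks like (the class and method names follow the text; the XML handling is reduced to a string for brevity):

```csharp
public class XMLReader
{
    public string LoadDocument()
    {
        // Imagine an expensive parse of a large XML file here.
        return "<root><path>a/b/c</path></root>";
    }
}

public class PathFinder
{
    private readonly XMLReader _reader;

    public PathFinder()
    {
        // The code smell: the dependency is hard-wired with "new".
        _reader = new XMLReader();
    }

    public string Find(string path)
    {
        string document = _reader.LoadDocument();
        return document.Contains(path) ? path : null;
    }
}
```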


Actually, no! To test the Find logic, we instantiated a “new XMLReader()” in our PathFinder constructor. Every time we test the Find logic, we have to create this class instance too. Maybe today XMLReader doesn’t do much in its own constructor, but somebody might change its behaviour and it may take longer. Even worse, it might throw an exception, and that is another behaviour you would have to consider in your tests.

Also, you are now stuck with XMLReader in PathFinder. You can’t load XML from another source or with another implementation.

Unit testing means testing a single unit, as the name implies. We are testing two units in this case.

ProTip: Unit tests should be fast and repeatable. You should be able to run them every time you change a bit of code. This way, when you break something, you can see what is failing and fix it quickly, rather than searching for where you failed in a long piece of code or a large project. It’s all about isolating the functionality being tested. There are tools for every language that run tests in the background while you are coding and show you the results.

How to make this code testable;

Polymorphism and Dependency Injection (DI) are your tools here. I know every unit test / TDD blog starts with them, but be patient; I will show you more details in the following sections.


I created an IDocumentSource interface and changed XMLReader to implement it. Now I don’t even need the XMLReader class to test PathFinder.


I changed the PathFinder constructor. Instead of creating an instance, it now accepts an injected constructor parameter.
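Put together, the refactored sketch looks like this (same assumed names as before):

```csharp
public interface IDocumentSource
{
    string LoadDocument();
}

public class XMLReader : IDocumentSource
{
    public string LoadDocument()
    {
        // Imagine an expensive parse of a large XML file here.
        return "<root><path>a/b/c</path></root>";
    }
}

public class PathFinder
{
    private readonly IDocumentSource _documentSource;

    // Dependency Injection: the caller decides which
    // IDocumentSource implementation PathFinder gets.
    public PathFinder(IDocumentSource documentSource)
    {
        _documentSource = documentSource;
    }

    public string Find(string path)
    {
        string document = _documentSource.LoadDocument();
        return document.Contains(path) ? path : null;
    }
}
```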

ProTip: You guessed it right, this is Dependency Injection (DI). It is not something complicated; we just pass documentSource as a parameter.

We also use polymorphism: we use the IDocumentSource interface as the parameter type instead of the concrete class. Now we can pass anything that satisfies that contract: mocks, stubs, fakes or the actual concrete class. This really helps us isolate the functionality that we test.


I updated the test so that we return a dummy document using a mock object, instead of loading a large XML with the concrete XMLReader class.

We set up the mock so that we are sure PathFinder calls LoadDocument and gets the Document after it is loaded.
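With Moq and NUnit, that test can be sketched like this (the dummy document content is an assumption):

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class PathFinderTests
{
    [Test]
    public void Find_PathExistsInDocument_ReturnsPath()
    {
        // Arrange: a mocked document source returns a small dummy document,
        // so no real XMLReader (and no large XML file) is involved
        var source = new Mock<IDocumentSource>();
        source.Setup(s => s.LoadDocument())
              .Returns("<root><path>a/b/c</path></root>");
        var finder = new PathFinder(source.Object);

        // Act
        string result = finder.Find("a/b/c");

        // Assert: the Find logic worked and the document was loaded once
        Assert.That(result, Is.EqualTo("a/b/c"));
        source.Verify(s => s.LoadDocument(), Times.Once);
    }
}
```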

Now we have an extra line for actually reading the document. We moved this behaviour to a higher level in the call stack. This means our code is more flexible now: we can have one PathFinder and read many times.

Yes, we have a little bit more code here, but you will see the advantage. Note that this was for the sake of providing an example. In real life the components are bigger, and the amount of code is likely to be about the same because the setup of the test is straightforward.

Now that we have set the basics, let’s move on to the next example.


Constructors Calling Static Methods


Say you decided to create some static members in your project. They make testing hard when used for things like;

  • singletons
  • shared resources like in-memory caching
  • utility classes

When you depend on these excessively, they become a burden as your project grows. It is OK to use extension or conversion methods as long as they don’t have side effects and are composable; you can test those individually. Other than that, static variables represent global state, which is really hard to test. That is a very common code smell and you should be careful with it.

ProTip: For most of the hidden state in large classes, we extract it into other meaningful classes. This way we can test them separately, and we can also use mocks/stubs of them to test the main class’s functionality. It is a technique we use a lot when we refactor towards more testable code.

Let’s say we have a Counter class, and another class, Class1, calling the Counter’s method in its constructor.


It looks simple. Every Class1 that we create gets a new Index number, starting with 1.
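Reconstructed from the description (the original was a screenshot), the pair looks roughly like this:

```csharp
public static class Counter
{
    private static int _count;

    public static int Increment()
    {
        _count = _count + 1;
        return _count;
    }
}

public class Class1
{
    public int Index { get; private set; }

    public Class1()
    {
        // Hidden dependency on global, mutable state.
        Index = Counter.Increment();
    }
}
```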

Let’s try to unit test this behaviour.
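The naive tests for the Counter/Class1 pair described above would look something like this sketch; each hard-codes the expected Index, which is exactly what goes wrong:

```csharp
using NUnit.Framework;

[TestFixture]
public class Class1Tests
{
    [Test]
    public void FirstInstance_GetsIndexOne()
    {
        var c1 = new Class1();
        // Fails if any other test created a Class1 before this one
        Assert.That(c1.Index, Is.EqualTo(1));
    }

    [Test]
    public void SecondInstance_GetsIndexTwo()
    {
        var c2 = new Class1();
        // Only passes if the previous test ran first; order-dependent
        Assert.That(c2.Index, Is.EqualTo(2));
    }
}
```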


Yes, as you have guessed we got errors.



Remember these TDD / unit test rules

  • Tests should be fast
  • Tests should be repeatable at any time
  • Tests shouldn’t have state, so that we can run them in parallel (this is also true for test fixtures)
  • Tests should not depend on each other to run
  • Tests should be as simple as possible, preferably asserting one thing

In this example, our tests can’t run in parallel and they are not repeatable. Should we implement a Reset method in our Counter class just to be able to test it? What if, in the future, we run our tests on a fast Bamboo server, in parallel with 50 more tests exercising this Counter behaviour? Between a Reset and an Increment, a parallel test’s Increment can slip in. Then you have wobbly test results. So the answer is no!

What is the testable approach here?

In our Class1-specific unit tests, we test the behaviour of Class1 only. Knowing what to test, and what is not in the test scope, is really important. In our example, we call Increment and check that the Index is 1, but the actual Index value is not important for us; such tests exist only for code coverage. The real target should be checking that the Increment method of the Counter is called in our constructor. In the future, when somebody deletes this code accidentally, our test should fail.

If we change the static class to non-static, we can inject an instance of the Counter class into Class1. Yes, I break things by converting a static class to non-static; other classes were dependent on the implementation of Counter. But now they don’t care whether the injected reference is defined as static somewhere in an upper tier of the architecture. You can now decide how to instantiate Counter. Maybe when your project gets larger, you decide to use an IoC container; C# has an abundant list of IoC containers. Then you get rid of the static Counter and have it as a singleton. You have many options. Do you see how writing testable code changed your architecture for the better?


Now we can mock the injected ICounter and test that it’s called in the constructor.
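A sketch of the refactored pair and its test (Moq + NUnit; the interface name follows the text):

```csharp
using Moq;
using NUnit.Framework;

public interface ICounter
{
    int Increment();
}

public class Counter : ICounter
{
    private int _count;

    public int Increment()
    {
        _count = _count + 1;
        return _count;
    }
}

public class Class1
{
    public int Index { get; private set; }

    // The counter is injected; Class1 no longer cares where it lives
    public Class1(ICounter counter)
    {
        Index = counter.Increment();
    }
}

[TestFixture]
public class Class1Tests
{
    [Test]
    public void Constructor_CallsIncrementOnCounter()
    {
        // Strict mock: any call that was not set up fails the test
        var counter = new Mock<ICounter>(MockBehavior.Strict);
        counter.Setup(c => c.Increment()).Returns(42); // any number works

        var c1 = new Class1(counter.Object);

        Assert.That(c1.Index, Is.EqualTo(42));
        counter.Verify(c => c.Increment(), Times.Once);
    }
}
```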


We did it! We could use any number instead of 1 or 2 and assert it, because we are independent of the Counter implementation. Remember, we are testing the unit behaviour of Class1 here. If somebody changes Class1’s constructor in the future, these tests will assert that the usage is still there.

Pro Tip: We have extended the testing strategy here. Sometimes we don’t even need to use Assert directly in our AAA test structures. When you get used to unit testing, you will see that the AAA structure blends into SetUps, TearDowns and explicit mock behaviours. You can see the MockBehavior.Strict and VerifyAll() in the TearDown; that is already an assertion.

Now our tests pass.



Constructors Having Business Logic


I have read somewhere: “Do not put any business logic or initialization code into constructors, only set parameters”. That looks very strict, right? Let’s elaborate on this a little bit.

There is no single “right” way to do things in C#. There are many debates about constructors having logic. You can put business decisions in your constructors; some publicly recognized C# patterns execute decisions in their constructors. But even if they are proven to be good, that doesn’t mean they are testable.

Initialization may also contain logic. Maybe in your constructor you try to connect to a database. At first this design seems error-free, but that is an external domain outside your application scope, and it may throw errors. You would like to catch those and maybe throw something else, handle them gracefully, or retry connecting. But testing these is also hard if they are in constructors.

ProTip: I recently realized that constructors are not named code blocks; I mean, you can’t name them properly. A constructor doing XML file loading should be named “LoadXmlFile”, and a constructor checking a database connection should be named “CheckDbConnection”, right? That looks like a function then. Remember, most of our work is naming things.

In this example we have a UserRepository with a constructor accepting a cacheEnabled parameter as a Boolean.


There is business logic in the constructor: if the cache is enabled, it gets all users from the database into the cache. If we connect this class to a real db and actually get users instead of returning null, it will work without any problems.
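A sketch of that class, reconstructed from the description (GetAllUsersFromDb returning null stands in for the real database call, and User is a minimal entity for the sketch):

```csharp
using System.Collections.Generic;

public class User
{
    public int Id { get; set; }
}

public class UserRepository
{
    private List<User> _cache;

    public UserRepository(bool cacheEnabled)
    {
        // Business logic in the constructor: the decision AND the
        // (potentially expensive) database call both live here.
        if (cacheEnabled)
        {
            _cache = GetAllUsersFromDb();
        }
    }

    private List<User> GetAllUsersFromDb()
    {
        // Imagine a slow call that loads every user from the database.
        return null;
    }

    public User GetUser(int id)
    {
        if (_cache != null)
            return _cache.Find(u => u.Id == id);
        return null; // would query the database here
    }
}
```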

Let’s plan some basic unit tests;

  • Test1 – If cache enabled, all users should be loaded into cache
  • Test2 – If cache disabled, there shouldn’t be a database call in constructor
  • Test3 – Get single user from cache
  • Test4 – Get single user from database

At this point we could come up with more tests about this class’s behaviour. I tried to write two of the tests above, to no avail;
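The two attempts, sketched against the constructor-heavy UserRepository described above; the comments mark the dead ends:

```csharp
using NUnit.Framework;

[TestFixture]
public class UserRepositoryTests
{
    [Test]
    public void CacheDisabled_ShouldNotCallDatabase()
    {
        var repo = new UserRepository(cacheEnabled: false);
        // Dead end: GetAllUsersFromDb is private and leaves no
        // observable trace, so there is nothing to assert on.
    }

    [Test]
    public void CacheEnabled_ShouldLoadAllUsersIntoCache()
    {
        var repo = new UserRepository(cacheEnabled: true);
        // Dead end: proving the cache was filled requires a real
        // database connection and a way to look inside the cache.
    }
}
```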


In the first test, there is no way to assert whether GetAllUsersFromDb() is called or not (unless we add a public Boolean state as a backdoor and set it to true when it’s called, or use reflection to check whether this function was called).

In the second test, there is no way without connecting to the db and getting all the users into the cache. Remember what we said about tests being fast and repeatable: this test might take many seconds, and on a developer machine it may even be impossible to cache a large database.

With this approach, if we had more constructor parameters, we wouldn’t know about the method calls and combinations inside the constructor. Even if we exposed state for all the behaviour, it would be really hard to be sure. This code doesn’t look testable, and it is prone to future errors.


To test a closed system’s behaviour, we need a defined cause and a measurable effect.

Let’s say we have a TV. We press the button on the remote and the TV turns on. Between these two observable things, an IR message is sent from the remote, the TV receives it, then many parts in the TV start working (power supply, tuner, logic board, LCD etc.), and as a result we get picture and sound. We don’t know what magic happens in the background, but we have a cause and an effect.

If we are testing the integration, we test all these parts acting together, end to end. For unit testing we need to isolate every unit and test them separately.

In our example, what happens if the constructor throws an error? Or maybe we want to change the cache mechanism or database connectivity. How do we change these without breaking the UserRepository unit’s business?

Here are some architectural problems;

  • We are violating SRP, the Single Responsibility Principle. UserRepository shouldn’t act as a caching engine; that is not the business ruleset of UserRepository.
  • The cache mechanism is bound to the Repository; there is no way to change it without changing the Repository.
  • There should be a way of separating the database engine, the cache engine, the UserRepository wrapper and the actual data.
  • We need isolation of all these separate responsibilities so that we can test them individually.

ProTip: The Single Responsibility Principle is the “S” of the S.O.L.I.D. principles. It means every module or class should have responsibility over a single part of the functionality. As Robert C. Martin explains the reasoning: “A class should have only one reason to change”. All of the SOLID principles should be applied in our daily programming and testing practices.

As I stated at the beginning, the current implementation may run without problems. But even if this doesn’t look like a problem now, when your project gets larger, refactoring and changing code that is not tested is risky. Unnecessary dependencies may block your future maintenance or increase your effort.

Here is how to make it testable;


A lot of change was required for those architectural problems. Let’s see what we have done;

  • Now the database engine and cache engine are different classes. We don’t test them here, so we don’t need to decide on them now. We only need to focus on UserRepository behaviour.
  • UserRepository is merely a wrapper over the database engine and cache engine. Its code is now simpler. Remember to “Keep it simple, stupid”.
  • No constructor code is required. Cache data should be filled outside the UserRepository domain; UserRepository is not responsible for filling the cache.
  • We only set parameters in the constructor.
  • Now our code is testable.

Check the final UserRepository code here. It looks more complete: no gray areas, no comments, no todos, just simple. We don’t have to implement and decide on large behaviour here, so the changes we made drive us to finished, working, testable code.
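The linked final code isn’t reproduced in this post, but the shape described above can be sketched like this (the interface names are assumptions, and User is a minimal entity for the sketch):

```csharp
using System.Collections.Generic;

public class User
{
    public int Id { get; set; }
}

public interface IDatabaseEngine
{
    List<User> GetAllUsers();
    User GetUser(int id);
}

public interface ICacheEngine
{
    bool IsEnabled { get; }
    User GetData(int id);
}

public class UserRepository
{
    private readonly IDatabaseEngine _database;
    private readonly ICacheEngine _cache;

    // Only parameter assignments: nothing in here can fail,
    // block, or surprise a test.
    public UserRepository(IDatabaseEngine database, ICacheEngine cache)
    {
        _database = database;
        _cache = cache;
    }

    public User GetUser(int id)
    {
        if (_cache.IsEnabled)
            return _cache.GetData(id);
        return _database.GetUser(id);
    }
}
```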

Protip: A factory pattern might be used here. A factory is a way to create an object without exposing the creation logic. Its responsibility is to compose the object, which supports Single Responsibility.

Here are some tests;
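A sketch of such a test fixture (NUnit + Moq; names are assumptions, matching the details listed right after):

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class UserRepositoryTests
{
    private Mock<IDatabaseEngine> _database;
    private Mock<ICacheEngine> _cache;

    [SetUp]
    public void SetUp()
    {
        // Strict mocks: any call that was not set up fails the test
        _database = new Mock<IDatabaseEngine>(MockBehavior.Strict);
        _cache = new Mock<ICacheEngine>(MockBehavior.Strict);
    }

    [TearDown]
    public void TearDown()
    {
        // Every Setup must have been used: an implicit assertion
        _database.VerifyAll();
        _cache.VerifyAll();
    }

    [Test]
    public void Constructor_DoesNotTouchTheDatabase()
    {
        var repo = new UserRepository(_database.Object, _cache.Object);

        // No classic Assert line: we verify GetAllUsers was never
        // called while constructing the repository
        _database.Verify(d => d.GetAllUsers(), Times.Never);
    }

    [Test]
    public void GetUser_CacheEnabled_ReadsFromCache()
    {
        var user = new User { Id = 1 };
        _cache.Setup(c => c.IsEnabled).Returns(true);
        _cache.Setup(c => c.GetData(1)).Returns(user);
        var repo = new UserRepository(_database.Object, _cache.Object);

        // Acting and asserting in one line
        Assert.That(repo.GetUser(1), Is.EqualTo(user));
    }
}
```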


The details you should check here are;

  • We set up the mock objects strictly. The SetUp method runs before every test.
  • The TearDown method runs after every test, so we verify the mock behaviour there. If any mock setup is not satisfied, we get a warning.
  • Now it is easy to test that the database engine’s GetAllUsers is not called on creation of UserRepository.
  • Setting up mocks for the database engine and cache engine is also easy. For example, if the cacheEngine.GetData method is not called during execution, we get a mock exception and the test fails.
  • The AAA structure is blended into the refactored test code. When we are Asserting, we are also Acting (Assert … repo.GetUser).
  • There is no Assert line in the first test. Instead, we verify GetAllUsers is never called inside new UserRepository.



In this chapter we had a sneak peek at how to use constructors while writing testable code in C#.

In short, we should be careful with these code-smells;

  • Using the “new” keyword in constructors
  • Calling static methods in constructors
  • Setting hidden state in constructors (whether static or not)
  • Logic code in constructors (loops should also be carefully investigated as a code smell)

What we can do about it;

  • Check the code against the Single Responsibility Principle; the code smell is mostly in constructors
  • Separate and isolate those large classes so you have simpler, testable constructors
  • Think about whether the class is a data class or a logic class. Separating those concerns will give you clean constructors (and a better design)
  • Move hidden state to external classes
  • Dependency Injection is your friend
  • Keep It Simple, Stupid (KISS)

These are general guidelines to follow at the beginning of your unit testing adventures. Of course, there are asides and exceptions. Remember, there is no single way to do things in C#.

You can download all the example code from this link.

Happy Coding 😊

Serkan Berksoy


I hereby thank my friend and colleague Bas van der Linden for reviewing and making suggestions.

Writing Testable Code in C#

Testing is a method for demonstrating that software satisfies its specification, and a methodology for preventing faults from being introduced when maintaining software. In the sixties this was done manually, e.g. by labour-intensive debugging. Historically, preventing faults in software with tests gained traction after the 1990s.

As engineers, we are always in search of better and more optimal solutions in our craftsmanship. While there are many alternative ideas around, the mainstream one is having testable software. Having unit tests increases the quality of software in many ways:

  • Helps us find the bugs earlier
  • Provides documentation on how the unit is intended to be used
  • Improves design
    • Simplifies maintaining and extending features
    • It forces your code to be more modular
  • Brings up clarity for developers, makes you really understand your code
  • Simplifies refactoring
  • Simplifies debugging, and mostly removes the need for debugging
    (relying on debugging while coding is a bad habit)
  • Makes collaboration easier and more efficient
  • Removes the fear of change

It is also good for planning work if you can trust your code. Even better, if you are developing with TDD you are adding a lot of unit tests as you go.

So now we have the basics: having tests is good. We tell every developer to test their code to increase quality. But the problem is that we don’t have a common definition of how to write testable code. Coming from more traditional coding styles and trying to adapt to unit testing is a delicate process; you need to change your mindset. It is challenging because some of these principles prohibit you from using some of the language features.

In this series I will try to cover the principles of writing testable code, with some examples.

You can download all the example code from this link.

Here are the chapters;

Adding a default value node with XSLT

Hi all,

Today I was introduced to a problem involving an old technology, XSLT. 🙂
I realized that I had forgotten all about XSLT, because I haven’t coded with XML and XSLT since JSON became a thing. I think that was around 2008-2010.

The problem is;
– We have 2 nodes that the customer can use: defaultX, defaultY
– If the XML has these nodes, use their values in the output.
– If they are missing, introduce a default value.

There are no find, copy or delete methods in XSLT like in a regular programming language, so we need a workaround.
Basically, XSLT is all about using templates within templates, right?
Here we go.

First you gotta have an identity template. That’s something everyone agrees on.
What an identity template does is copy everything from the XML into your output.
It looks like this;


<xsl:template match="@*|node()">
    <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
</xsl:template>


Then you need another template that runs from the apply-templates of the previous one.


<xsl:template match="/*">
    <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
        <xsl:if test="count(defaultx) = 0">
            <defaultx>||| DEFAULT VALUE FOR X |||</defaultx>
        </xsl:if>
        <xsl:if test="count(defaulty) = 0">
            <defaulty>||| DEFAULT VALUE FOR Y |||</defaulty>
        </xsl:if>
    </xsl:copy>
</xsl:template>


What I am doing above is matching the root element and copying everything into my new output again. While doing this, I test the count of the defaultx tag: if the count is zero, the tag does not exist in the input, so I add my own defaultx tag. Then I do the same for the defaulty tag.

So XML and XSLT are not dead yet. For microservices and web development we mostly use JSON, but for old-school banking and Windows apps we still use XML files. (Linux apps are another story; they mostly use JSON and their own config file style, which I love.)

There are plenty of resources out there, but it still took me almost a day to get back into it. So here is a small solution that might help you someday.

You can download this small example from my gist repo.

Cheers all and happy coding.

Run an application if it’s not already running

Today we had a problem with an old Windows XP machine running a 3rd party executable file. The executable sends info to a light board. Digging into the problem, I figured out that the executable stops from time to time, either through client interaction or by itself.

What we needed was to constantly check whether the app is running; if it has stopped, we should start it again. The process does not hang, so we don’t need anything there. Also, the machine’s OS is old, so we needed something other than WMI or PowerShell. I didn’t waste time checking whether those run on XP.

Here is how I solved it;

First I looked for a batch file to do that, and thanks to Stack Overflow I came up with this;

tasklist /FI "IMAGENAME eq TOTALCMD.EXE" /FO CSV > search.log

FOR /F %%A IN (search.log) DO IF %%~zA EQU 0 GOTO end

cd "C:\TotalCMD"
start TOTALCMD.EXE

:end
del search.log

I tested it with good old TOTALCMD.EXE; you can test it with your own executable. It’s pure DOS.

Then I felt unhappy about the unnecessary log file it creates, so I coded a small console application and compiled it against .NET 2.0.

Here is the code:

using System;
using System.Diagnostics;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        if (args.Length != 2 || string.IsNullOrEmpty(args[0]) || string.IsNullOrEmpty(args[1]))
        {
            Console.WriteLine(@"Rerun Process
Usage: RerunApp [Executable] [Path]");
            return;
        }

        // Start the process only if it is not already running.
        if (!IsProcessOpen(args[0].ToLower().Replace(".exe", "")))
            StartProcess(args);
        else
            Console.WriteLine("{0} Process Is Already Running", args[0]);
    }

    private static void StartProcess(string[] args)
    {
        Process p = new Process();
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.RedirectStandardOutput = true;
        p.StartInfo.FileName = Path.Combine(args[1], args[0]);
        try { p.Start(); }
        catch (Exception ex)
        {
            Console.WriteLine("Error Rerunning {0}\n{1}", Path.Combine(args[1], args[0]), ex.Message);
        }
    }

    private static bool IsProcessOpen(string name)
    {
        // Process.GetProcesses() returns every running process on the machine.
        foreach (Process clsProcess in Process.GetProcesses())
            if (clsProcess.ProcessName.ToLower().Contains(name))
                return true;
        return false;
    }
}

What this app does is pretty straightforward. It checks for the app name among the running processes; if it is not there, it tries to start it. I used System.Diagnostics.Process to create a new process and redirect its output.

Process.GetProcesses() gets all the running processes. Here is the usage if you are not happy with it or want to alter it.
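For a quick feel of the same check-and-restart flow, here is a rough Python sketch. It is not a faithful port of the C# tool above: it shells out to `tasklist` on Windows and `ps` elsewhere instead of an API call, and the function names are my own.

```python
import os
import subprocess

def is_process_open(name):
    # List running process names: tasklist on Windows, ps everywhere else.
    cmd = ["tasklist"] if os.name == "nt" else ["ps", "-A", "-o", "comm"]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return name.lower() in out.lower()

def rerun_if_stopped(executable, path):
    # Mirror of the C# flow: start the app only when it is not running.
    if is_process_open(executable.lower().replace(".exe", "")):
        print("{0} Process Is Already Running".format(executable))
    else:
        subprocess.Popen([os.path.join(path, executable)])
```

A substring match on process names is crude (as in the original), but it was good enough for this babysitting job.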

You can call the program like the example below;

RerunApp [Executable] [Path]
RerunApp “MYAPP.EXE” “C:\My Folder\My App\”

You can schedule it with the Windows Task Scheduler with the parameters you want, and almost get Windows Service-like behavior on old systems when running 3rd party executable files.

Here is the source;

Happy Coding 🙂

Delete All Your Yahoo Mail with AutoIt

UPDATE – I tried to clean up my Yahoo mail recently. They fixed the problem in the new interface: you can scroll down and select all messages, then do what you want with them, like moving them to another folder or deleting them.

Yesterday I had a problem: Yahoo didn’t allow me to delete all the mail in a folder.

I had 20,000 mails in my old Customers folder. I had redirected all application system warnings to my Yahoo mail and forgot to maintain it, so after a year it had grown to about 20,000 mails. I tried everything to delete those mails by filtering etc., but Yahoo wouldn’t bend to my will. So I figured there should be another solution and decided to do it with brute force.

There is a great application called AutoIt. I used it once to automate some UI tasks and develop automated tests. It provides a scripting language with which you can automate pretty much anything you do with your mouse and keyboard.

You can download it here;

Also there is an AutoItRecorder application to record your mouse moves and key presses, then save all recording session as an AutoIt script. You can download it here;

Note: The shortcut for AutoIt-Recorder throws an error, but you can go manually to C:\Program Files (x86)\INET-Consulting\AutoIT-Recorder.

So here is how I use both apps together;

1. Run AutoIt-Recorder and do whatever you want to automate. Press CTRL+BREAK to stop the recorder.


2. Save your script to a file. It creates a large file with every mouse coordinate recorded in it.

3. Right click the script file and select Edit Script. This opens the cute AutoIt editor. You don’t need another editor; the AutoIt editor is great for this kind of operation. You can also test your script by hitting F5 and stop it with CTRL+BREAK in the editor.

4. So, you have a large recording file with lots of MouseMove(x, y) coordinates. You don’t actually need all of them to run this script. Find the lines where the MouseDown("left") code appears; these are the lines where you actually click something. The last MouseMove command above each MouseDown line is your final destination. You only need these to go somewhere and click something on the screen; the other MouseMove lines between MouseDown commands can be deleted.

5. You may also need to put some delay between MouseMove, MouseDown and other methods. If you are automating a task on a web page, the page may not respond in time for your next move. We do this by putting a Sleep command between the MouseMove and MouseDown calls.

Sleep(500) -> 500 milliseconds is the default I use. But if I need to wait for a web page to reload, I use at least Sleep(5000) (5 seconds), depending on the page’s performance.

Here is my code for deleting Yahoo mail. It works in the Chrome browser at 1920 x 1080 resolution. You can change the $pageCount variable to your needs; currently it deletes 70 pages of mails.
What my script does is straightforward: it deletes all mail page by page. First it goes to the checkbox at the upper left corner and clicks it. Then it goes to the Delete button and clicks it, then to the OK button and clicks it, and waits 5 seconds. When the page updates it scrolls 2 lines down, so the script presses the Home key to keep the coordinates intact. After that it goes and clicks the next page.

– Remember, it deletes ALL mail in that folder, so be careful. I am not responsible if you delete the wrong mail.
– If you have a different resolution, different browser etc., you need to record your own script or alter the coordinates below.

Dim $i = 0

Dim $pageCount = 70 ; pages of mail to delete

While ($i < $pageCount)
   ; click the select-all checkbox at the upper left corner
   MouseMove(196, 193)
   Sleep(500)
   MouseDown("left")
   MouseUp("left")
   Sleep(500)

   ; click the Delete button
   MouseMove(210, 173)
   Sleep(500)
   MouseDown("left")
   MouseUp("left")
   Sleep(500)

   ; click OK in the confirmation dialog, then wait for the page to reload
   MouseMove(796, 596)
   Sleep(500)
   MouseDown("left")
   MouseUp("left")
   Sleep(5000)

   ; the page scrolls a bit after reload; Home keeps the coordinates intact
   Send("{HOME DOWN}")
   Send("{HOME UP}")
   Sleep(500)

   ; click the next-page arrow
   MouseMove(1708, 199)
   Sleep(500)
   MouseDown("left")
   MouseUp("left")
   Sleep(1000)

   $i = $i + 1
WEnd

Happy Coding 🙂

What’s new in the latest NuGet 2.5 Update

Hi all,

We have all been using the beautiful NuGet extension in Visual Studio for a while now. NuGet is developed by famous people like Scott Hanselman and Phil Haack. 🙂

For people who don’t know what NuGet is: NuGet is a package manager for Visual Studio that works with a central package repository. There are 12,500 packages currently in the repository, which is generally enough for your standard project needs; almost every package is there. That means no more crawling the web for popular packages’ download links, then downloading and extracting them to add to your project.

In Visual Studio, you can add a package by right clicking the solution and selecting “Manage NuGet Packages for Solution”. Select “Online” from the left menu and search for the package name in the top-right search box. When you find your package, just click Install and tadaa, it’s done.


Back to our topic: on April 25 they released the latest update. First let me tell you where to find this update: open Visual Studio and go to Tools -> Extensions and Updates; there you will see whether your NuGet version needs to be updated. Alternatively, you can download and install it manually.

These are the what’s-new items from the NuGet web site:

  • Option to overwrite content files
  • Automatic imports of msbuild targets
  • Different references per platform
  • Update All button
  • Improved project reference support for NuGet.exe Pack
  • Add a ‘Minimum NuGet Version’ property to packages
  • Dependencies are no longer unnecessarily updated during package installation
  • NuGet.exe outputs HTTP requests with detailed verbosity
  • NuGet.exe push supports UNC and directory sources
  • NuGet.exe supports explicitly-specified Config files
  • Support for C++ and WiX project types
  • Support for Mono/Xamarin projects targeting iOS, Android, and Mac
  • Package Restore improvements
    (a prototype of the new features will be available as a separate download/installation)

As for me, I was waiting for performance updates. You know, when I’m not at home, NuGet package updates are always slow over phone tethering or a client’s or Starbucks’ wireless. 🙂 With this update, NuGet no longer updates a dependency that is already satisfied within the project. By that I mean: say your project already has package A v1.0.0 and you install B, which depends on A v1.0.0. If the repository had A 1.0.2, NuGet used to update A first. From now on, A is not updated.

There is also another update which is great for resolving package conflicts: you can now overwrite existing package files. If you are using the Package Manager Console, there is a new parameter for the Update-Package and Install-Package commands called FileConflictAction. You can pass Overwrite to always overwrite, or Ignore to skip files. If you leave it out, you’ll get prompted for every file.
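In the Package Manager Console that looks like this (the package name is a hypothetical placeholder):

```powershell
PM> Install-Package MyPackage -FileConflictAction Overwrite
PM> Update-Package MyPackage -FileConflictAction Ignore
```

Handy for scripted updates where you don’t want a prompt blocking the console.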

And there is another great feature for people like me who work with old projects once in a while. When you open a (relatively) old project, go to the NuGet Package Manager by right clicking the solution. Go to “Updates” in the left menu and select “All”. There is now an “Update All” button to update all packages, so you can stay up to date with the latest versions of the packages you use in the project. Also keep in mind that NuGet has a cache folder on your computer under %LocalAppData%\NuGet\Cache. When you update a package it is saved into that folder, and when you add the same package version to a project it is not downloaded again.

There are more updates listed above but for me these are the most important. Thank you NuGet team.