Friday, December 4, 2009

12 Important FAQ’s on VSTS Testing (Unit testing, load testing, automated testing, database testing and code coverage)

Introduction

This article covers 12 important FAQs on unit testing, automated testing, data-driven testing, load/performance testing, code coverage, database testing and ordered testing.
I have collected around 400 FAQ questions and answers in Silverlight, Azure, VSTS, WCF, WPF, WWF, SharePoint, Design Patterns, UML etc. Feel free to download these FAQ PDF’s from my site http://www.questpond.com/

VSTS 2010 download

This article uses VSTS heavily, so if you do not have the beta, download it from:
http://www.microsoft.com/downloads/details.aspx?FamilyID=255fc5f1-15af-4fe7-be4d-263a2621144b&displaylang=en

What is Unit testing?

Unit testing is a validation and verification methodology where developers test the individual units of source code.
Some key points to remember:
  • A unit is the smallest part of the application which can be tested, so it can be a method, function or class.
  • These tests are conducted during development.
  • Unit tests belong to the white box testing category.

What are the different ways by which you can do unit testing with .NET?

There are 2 primary / accepted ways of doing unit testing:-
  • NUNIT
  • Visual studio team edition test system

Can we start with a simple NUNIT example?

Ok, so let’s say we want to do unit testing for the simple class ‘clsInvoiceCalculation’ shown below. This class calculates the total cost by taking per product cost and number of products as input.
public class clsInvoiceCalculation
{
    public int CalculateCost(int intPerProductCost, int intNumberOfProducts)
    {
        return intPerProductCost * intNumberOfProducts;
    }
}


Let’s say we want to execute the below test case on the above class using NUNIT: per product cost = 10, number of products = 20, expected output = 200.
Step 1:- The first step is to download the NUnit software from
http://nunit.org/index.php?p=download
Step 2:- Create a new C# class project and add reference to “C:\Program Files\NUnit 2.5.2\bin\net-2.0\framework\nunit.framework.dll”. We also need to add reference to the class project which we are testing i.e. the invoice project.


Step 3:- Add reference to the NUNIT and your invoice project in your code.
using NUnit.Framework;
using Invoice;


Step 4:- We need to create a simple class attributed with ‘TestFixture’. We then need to create a method attributed with ‘Test’ which will contain our test.


You can see in the above figure how the ‘TestInvoiceCalculation’ method creates an object of ‘clsInvoiceCalculation’, passes the values and checks whether the returned value matches the expected result. The ‘Assert’ class is used to check that the expected output and the returned result match.
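The figure referenced above is not reproduced here; as a hedged sketch, such an NUnit test class would look roughly like the code below. The namespace and test class names are assumptions for illustration; the input and expected values come from the test case described in this article.

```csharp
using NUnit.Framework;
using Invoice;

namespace InvoiceTests
{
    [TestFixture]
    public class clsTestInvoice
    {
        [Test]
        public void TestInvoiceCalculation()
        {
            // Create an object of the class under test
            clsInvoiceCalculation objInvoice = new clsInvoiceCalculation();

            // Per product cost = 10, number of products = 20
            int result = objInvoice.CalculateCost(10, 20);

            // Assert checks whether the expected output and the returned result match
            Assert.AreEqual(200, result);
        }
    }
}
```

Compile this into a DLL and load it in the NUnit GUI as described in Step 5 below.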


Step 5:- Once we are finished with our test case, we compile it to create a DLL. Once the DLL compilation is done, go to Program Files – NUnit and click on NUnit; the NUnit user interface will open. Click File – Open Project and select your test DLL.

Once you select your DLL you will get a screen as shown below. On the right hand side you can see your test class with the test function which holds the test case. Check the test function and hit run. If the test case passes you will see full green, otherwise you will see red with details of why the test case failed.


Screenshot when the test case fails.



How can we make NUNIT test cases data driven?

In the previous questions we hardcoded the test data in the NUNIT test case itself. But in real projects you would like the test data inputs to come from an XML file or a database; in other words, you would like to create data-driven test cases. To create data-driven tests in NUNIT we can attribute the unit test method with the ‘TestCaseSource’ attribute as shown in the below figure. In ‘TestCaseSource’ we need to provide the function which will return the test case data, in this case ‘TestCases’.


Below is the code snippet which provides dynamic data to the unit test method. The function providing the dynamic data should return ‘IEnumerable’. NUNIT provides a ‘TestCaseData’ class which defines the test case data and provides ways to specify the inputs and the expected output of each test case.

Finally, to return the test cases one by one, we need to use the ‘yield’ keyword. You can see the below code snippet to understand how yield works.
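The code snippets mentioned above are shown as figures; a hedged sketch of such a data-driven NUnit test follows. The second set of input values and the method names are illustrative assumptions, not from the original article.

```csharp
using System.Collections;
using NUnit.Framework;
using Invoice;

[TestFixture]
public class clsDataDrivenInvoiceTest
{
    // Function which supplies the test case data; it returns IEnumerable
    // and yields one TestCaseData object per test case.
    public static IEnumerable TestCases()
    {
        // Inputs (10, 20) with expected output 200
        yield return new TestCaseData(10, 20).Returns(200);
        // A second, assumed test case for illustration
        yield return new TestCaseData(5, 10).Returns(50);
    }

    [Test, TestCaseSource("TestCases")]
    public int TestInvoiceCalculation(int perProductCost, int numberOfProducts)
    {
        // NUnit compares the returned value against the .Returns(...) value
        return new clsInvoiceCalculation().CalculateCost(perProductCost, numberOfProducts);
    }
}
```

The ‘yield return’ statements hand the test cases to NUnit one by one, which is what makes the data source lazy.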


How can we do unit testing using VSTS test?

If you do not have VSTS get the beta @
http://www.microsoft.com/downloads/details.aspx?FamilyID=255fc5f1-15af-4fe7-be4d-263a2621144b&displaylang=en

In order to do unit testing using VSTS, right click on the Visual Studio solution explorer and select add new project. In the test project, right click and select add new test; a dialog box will pop up as shown in the below figure. From the dialog box select ‘Basic Unit Test’.


In order to create a function which will execute our unit test, you need to attribute the function with ‘[TestMethod]’. If you remember, in NUNIT it was ‘[Test]’. The assert function does not change.
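As a rough sketch, the VSTS (MSTest) version of the same invoice test could look like the code below; the class and method names are assumptions for illustration.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Invoice;

[TestClass]
public class clsVstsInvoiceTest
{
    // In VSTS the attribute is [TestMethod] instead of NUnit's [Test]
    [TestMethod]
    public void TestInvoiceCalculation()
    {
        clsInvoiceCalculation objInvoice = new clsInvoiceCalculation();
        int result = objInvoice.CalculateCost(10, 20);

        // The Assert syntax is the same as in NUnit
        Assert.AreEqual(200, result);
    }
}
```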

Once you have finished coding the unit test, compile it, click on Test -> Run and select ‘Tests in the current context’.


Once you run the test, it moves from pending to in progress and finally shows the results.

In case your test fails and you would like to see detailed results, double click on the test to see them as shown below. The below results show that the test failed because it was expecting “200” as the output and it got “30”.


How can we create data driven unit test using VSTS Test?

Creating a data-driven unit test is pretty simple in VSTS. First create a simple table with test data values. For instance, you can see in the below figure we have created a table with 3 fields, i.e. 2 input fields and 1 expected value. To consume this table, apply the ‘DataSource’ attribute with a proper database connection string as shown in the below figure.

To get field data from the table we can use ‘DataRow’ with an index.
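The figures are not reproduced here, so below is a hedged sketch of such a data-driven VSTS test. The connection string, table name and column names are assumptions for illustration only; substitute your own.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Invoice;

[TestClass]
public class clsDataDrivenVstsTest
{
    // MSTest injects the current data row through this property
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DataSource("System.Data.SqlClient",
        "Data Source=.;Initial Catalog=TestDb;Integrated Security=True", // assumed
        "InvoiceTestData",                                               // assumed table
        DataAccessMethod.Sequential)]
    public void TestInvoiceCalculation()
    {
        // Pull the inputs and the expected value from the current data row
        int perProductCost = Convert.ToInt32(TestContext.DataRow["PerProductCost"]);
        int numberOfProducts = Convert.ToInt32(TestContext.DataRow["NumberOfProducts"]);
        int expected = Convert.ToInt32(TestContext.DataRow["ExpectedCost"]);

        int actual = new clsInvoiceCalculation()
            .CalculateCost(perProductCost, numberOfProducts);

        Assert.AreEqual(expected, actual);
    }
}
```

VSTS runs the test method once per row in the table, which is why the next step configures the run as ‘one per data row’.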


We have 3 test cases in the table, so when we executed the above tests we got the below test results in the Visual Studio IDE result window.



How can we do automated testing using VSTS?

In order to do automated testing in VSTS we need to use the ‘Web test’ template. So click on add new test and select ‘Web test’.


Once you have selected the web test, a browser will open with record, pause and stop buttons as shown in the below figure. Once you start posting and requesting, the URL recorder records every request and response, as shown in the below figure. On the right you can see how the recorder has recorded all the GETs and POSTs.


Below is a snapshot of simple login screen which is recorded. There are two requests, the first request is for the login page and the second request is when you post the login page with userid and password. You can see the values i.e. ‘Admin’ and ‘Admin’ in userid and password textboxes.


We also need to define under which conditions the test passes. Right click on the second post and select ‘Add validation rule’ as shown in the below figure.


Select the rule which says that if the text ‘Logged in’ is found anywhere in the browser response, the test is passed.






If you run the test you will get the executed values, with the test marked as a pass or fail.






How can we make the automated test data driven in VSTS?

Once you have created the web test, right click on the web test and click add data source as shown in the below figure.



 Once you have added the data source you can then specify the database fields as inputs to the text boxes as shown in the below figure.


We need to perform one more step to ensure that the data driven test runs fine. Right click on the testrunconfig file and select one per data row as shown in the next figure.





Once you are done you can run the test and see how VSTS picks up row by row test cases and executes the test. You can see in the below figure the first test case has failed while the second test case has passed.


How can we do coverage testing using VSTS?


Code coverage is a 3-step process as shown below. The first step is to enable code coverage: right click on the ‘.testrunconfig’ file in the solution explorer as shown in the below figure.





The next step is to select the assembly/DLL which we want to monitor for code coverage; the below figure shows the same.



Once you run the test, right click on the test results and select code coverage results. You will be shown a detailed result, as below, where you can see which parts of your application are covered and tested.



What are the different steps involved to execute performance test using VSTS?

I will be updating this in 2 days; the pictures are not fitting well. Coming soon…

I have heard about Database testing in VSTS, what does it do?

I will be updating this in 2 days; the pictures are not fitting well. Coming soon…


What is ordered testing?

I will be updating this in 2 days; the pictures are not fitting well. Coming soon…



Monday, November 23, 2009


5 simple steps to execute unit testing using NUNIT


Introduction and Goal

Unit testing is a validation and verification methodology where developers test the individual units of source code. In this tutorial we will try to understand the 5 important steps to do unit testing using the NUNIT framework.
Please feel free to download my 400 .NET interview questions free eBook from
http://www.questpond.com which
covers sections like basic .NET, design patterns, 3.5, SQL Server, UML and lots of other sections.
The example which we will test: below is a simple class ‘clsInvoiceCalculation’ which calculates the total cost by taking per product cost and number of products as input.
public class clsInvoiceCalculation
{
    public int CalculateCost(int intPerProductCost, int intNumberOfProducts)
    {
        return intPerProductCost * intNumberOfProducts;
    }
}

Let’s say we want to execute the below test case on the above class using NUNIT.

Per product cost: 10
Number of products: 20
Expected output: 200 (test case passed)


The 5 basic steps to execute the unit test using NUNIT

Step 1:- The first step is to download the NUnit software from
http://nunit.org/index.php?p=download

Step 2:- Create a new C# class project and add a reference to “C:\Program Files\NUnit 2.5.2\bin\net-2.0\framework\nunit.framework.dll”. We also need to add a reference to the class project which we are testing, i.e. the invoice project.



Step 3:- Add references to NUNIT and your invoice project in your code.


using NUnit.Framework;
using Invoice;

Step 4:- We need to create a simple class attributed with ‘TestFixture’. We then need to create a method attributed with ‘Test’ which will contain our test.


You can see in the above figure how the ‘TestInvoiceCalculation’ method creates an object of ‘clsInvoiceCalculation’, passes the values and checks whether the returned values have the expected results. The ‘Assert’ class is used to check that the expected output and the returned result match.



Step 5:- Once we are finished with our test case, we compile it to create a DLL. Once the DLL compilation is done, go to Program Files – NUnit and click on NUnit; the NUnit user interface will open. Click File – Open Project and select your test DLL. Once you select your DLL you will get a screen as shown below. On the right hand side you can see your test class with the test function which holds the test case. Check the test function and hit run. If the test case passes you will see full green, otherwise you will see red with details of why the test case failed.



Screenshot when the test case fails.

Monday, October 12, 2009

.NET 4.0 FAQ Part 1 -- The DLR

Introduction
Where do I get .Net 4.0 from?

What are the important new features in .NET 4.0?

What’s the most important new feature of .NET 4.0?
What is DLR in .NET 4.0 framework?
Can you give more details about DLR subsystem?

How can we consume an object from dynamic language and expose a class to dynamic languages?

Can we see a sample of ‘Dynamic’ objects?

What’s the difference between ‘Dynamic’, ‘Object’ and reflection?

What are the advantages and disadvantage of dynamic keyword?

What are expando objects?

Can we implement our own ‘Expando’ object?

What is the advantage of using custom ‘Expando’ class?

What are IDynamicMetaObjectProvider and DynamicMetaObject?

Can we see performance difference between reflection and dynamic object execution?

Thanks, Thanks and Thanks
.NET 4.0 detail list of new features



Introduction

In this article we will discuss the new features provided by .NET framework 4.0. We will then take up the DLR feature and discuss ‘dynamic’ and ‘expando’ objects. We will also create a custom ‘expando’ class and see what benefits we can get from it. Many developers mistakenly think ‘dynamic’ objects were made to replace ‘reflection’ and the ‘object’ type; we will try to remove this misconception as well.
Sorry for the repost. I have deleted the old article due to image upload issues and uploaded again.

Please feel free to download my free 500 question and answer eBook which covers .NET 4.0 , ASP.NET , design patterns, silver light, LINQ, SQL Server , WCF , WPF, WWF@
http://www.questpond.com/

Where do I get .Net 4.0 from?

You can download .NET 4.0 beta from
http://www.microsoft.com/downloads/details.aspx?FamilyID=ee2118cc-51cd-46ad-ab17-af6fff7538c9&displaylang=en

What are the important new features in .NET 4.0?

Rather than walking through the 100-feature list, let’s concentrate on the top 3 features which we think are important. If you are interested in seeing the detailed list of new features, click here.
• Windows Workflow and WCF 4.0:- This is a major change in 4.0. In WCF they have introduced simplified configuration, discovery, routing service, REST improvements and workflow services. In WWF they have made changes to the core programming model of workflow, which has been made simpler and more robust. The biggest thing is the integration between WCF and WWF.
• Dynamic Language Runtime:- DLR adds dynamic programming capability to the .NET 4.0 CLR. We will talk more about it as this FAQ moves ahead.
• Parallel extensions:- These help to support parallel computing for multi-core systems. .NET 4.0 has PLINQ in the LINQ engine to support parallel execution. TPL (Task Parallel Library) is introduced, which exposes parallel constructs like parallel ‘For’ and ‘ForEach’ loops using regular method calls and delegates.
We will talk about the above features in more detail in the coming sections.
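As a quick illustrative taste of the parallel constructs mentioned above, here is a minimal sketch (not part of the original article) showing a TPL parallel ‘For’ loop and a PLINQ query:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        // TPL: a parallel 'For' loop expressed as a regular method call with a delegate
        Parallel.For(0, 10, i =>
        {
            Console.WriteLine("Processing item " + i);
        });

        // PLINQ: AsParallel() spreads the query across available cores
        int[] numbers = Enumerable.Range(1, 1000).ToArray();
        long sumOfSquares = numbers.AsParallel().Sum(n => (long)n * n);
        Console.WriteLine(sumOfSquares);
    }
}
```

Note that the Parallel.For iterations may execute in any order, which is exactly the point of the construct.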


What’s the most important new feature of .NET 4.0?

The new WCF and WWF features are among the most interesting of all. Especially the new programming model of WWF and its integration with WCF will be an interesting thing to watch.
DLR, parallel programming and the other new features somehow seem to be brownie points rather than compelling features. Kathleen Dollard talks about it in more detail:
http://msmvps.com/blogs/kathleen/archive/2009/01/07/the-most-important-feature-of-net-4-0.aspx
We will be seeing more of these features as we continue with the FAQ.

What is DLR in .NET 4.0 framework?

DLR (Dynamic Language Runtime) is a set of services which add dynamic programming capability to the CLR. DLR lets dynamic languages like LISP, JavaScript, PHP and Ruby run on the .NET framework.


There are two types of languages: statically typed and dynamically typed. In statically typed languages you need to specify the object type at design/compile time, while dynamically typed languages identify the object at runtime. DLR helps you to host code written in dynamic languages on top of the CLR.


Due to DLR runtime, dynamic languages like ruby, python, JavaScript etc can integrate and run seamlessly with CLR. DLR thus helps to build the best experience for your favorite dynamic language. Your code becomes much cleaner and seamless while integrating with the dynamic languages.


Integration with DLR is not limited to dynamic languages. You can also call MS office components in a much cleaner way by using COM interop binder.
One of the important advantages of DLR is that it provides one central and unified subsystem for dynamic language integration.


Can you give more details about DLR subsystem?

DLR has 3 basic subsystems:-

• Expression trees:- By this we can express language semantics in the form of an AST (Abstract Syntax Tree). DLR dynamically generates code from the AST which can be executed by the CLR runtime. Expression trees are the main player in running various dynamic languages like JavaScript and Ruby on the CLR.
• Call site caching:- When you make method calls on dynamic objects, DLR caches information about those calls. For subsequent calls to the method, DLR uses the cached information for fast dispatch.
• Dynamic object interoperability (DOI):- DOI has a set of classes which can be used to create dynamic objects. Developers can use these classes to create classes consumable from both dynamic and static languages.
We will be covering all the above features in more detail in the coming FAQ sections.



How can we consume an object from dynamic language and expose a class to dynamic languages?

To consume a class created in DLR supported dynamic languages we can use the ‘Dynamic’ keyword. For exposing our classes to DLR aware languages we can use the ‘Expando’ class.
So when you want to consume a class constructed in Python, Ruby, JavaScript, COM languages etc., you need to use a dynamic object to reference it. If you want your classes to be consumed by dynamic languages, you create your class by inheriting from the ‘Expando’ class. These classes can then be consumed by dynamic languages. We will see both these classes in the coming sections.


Do not forget to download the help document for library authors on how to enable dynamic languages across platforms using DLR:
http://dlr.codeplex.com/Wiki/View.aspx?title=Docs

Can we see a sample of ‘Dynamic’ objects?

We have already discussed that ‘dynamic’ objects help to consume objects created in dynamic languages which support DLR. The dynamic keyword is part of the dynamic object interoperability subsystem.
If you assign an object to a dynamic type variable (dynamic x = new SomeClass()), all method calls, property invocations and operator invocations on ‘x’ will be delayed until runtime, and the compiler won't perform any type checks for ‘x’ at compile time.
Consider the below code snippet where we are trying to do method calls to excel application using interop services.
// Get the running object of the excel application
object objApp = System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application");
// Invoke the member dynamically
object x = objApp.GetType().InvokeMember("Name", System.Reflection.BindingFlags.GetProperty, null, objApp, null);
// Finally get the value by type casting
MessageBox.Show(x.ToString());

The same code we now write using ‘dynamic’ keyword.
// Get the object using
dynamic objApp1 = System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application");
// Call the
MessageBox.Show(objApp1.Name);

You can clearly notice the simplification of the property invocation syntax. ‘InvokeMember’ is pretty cryptic and prone to errors; using the ‘dynamic’ keyword the code is greatly simplified.


If you try to view the properties in the VS IDE, you will see a message stating that they can only be evaluated at runtime.


What’s the difference between ‘Dynamic’, ‘Object’ and reflection?

Many developers think that ‘dynamic’ objects were introduced to replace ‘reflection’ or the ‘object’ data type. The main goal of ‘dynamic’ objects is to consume objects created in dynamic languages seamlessly from statically typed languages; they were never introduced to replace reflection or the object data type. But because of overlapping capabilities, and thanks to simplified code and call-site caching advantages, ‘dynamic’ may eventually substitute for reflection and object in many scenarios.


What are the advantages and disadvantage of dynamic keyword?

We all still remember how we talked badly about the VB6 ‘variant’ keyword (well, I loved the language) and how we all appreciated .NET bringing in compile-time checks, so why are we changing now?
Well, bad developers will write bad code in the best programming language, and good developers will fly with the worst one. The dynamic keyword is a good tool to reduce complexity, and a curse when not used properly.

So advantages of Dynamic keyword:-

• Helps you interop between dynamic languages.
• Eliminates bad reflection code and simplifies code complexity.
• Improves performance with method call caching.
Disadvantages:-

• Will hit performance if used with strongly typed language.

What are expando objects?

‘Expando’ objects serve the other side of interoperability, i.e. enabling your custom classes to be consumed in dynamic languages. So you can create an object of the ‘ExpandoObject’ class and pass it to dynamic languages like Ruby, JavaScript, Python etc. ‘Expando’ objects help to add properties on the fly; it’s an efficient implementation of a dynamic property bag. In order to use ‘ExpandoObject’ we first import the ‘System.Dynamic’ namespace.
using System.Dynamic;
We then create an ‘ExpandoObject’ instance and assign it to a variable declared with the ‘dynamic’ keyword. Note that we use a ‘dynamic’ reference because we do not yet know which properties will be created at runtime.

dynamic obj = new ExpandoObject();

For creating dynamic property we just need to write the property name and set the value.

obj.Customername = "Some Customer Name";

Finally we display the value.
MessageBox.Show(obj.Customername);


Can we implement our own ‘Expando’ object?

‘Expando’ object internally is nothing but properties added to a collection. So you can create your own version of ‘Expando’ object.
The first thing we need to do is inherit from ‘DynamicObject’ class.
public class clsMyExpando : DynamicObject
{
}

As said previously, we need to define a collection where we can save the properties. In the second step we create a dictionary object to maintain the properties.
public class clsMyExpando : DynamicObject
{
    Dictionary<string, object> items = new Dictionary<string, object>();
}

We can now override ‘TryGetMember’ and ‘TrySetMember’ to define our get and set behavior.
public class clsMyExpando : DynamicObject
{
    Dictionary<string, object> items = new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return items.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        items[binder.Name] = value;
        return true;
    }
}

We can now create an object of our custom ‘expando’ class and assign it to a ‘dynamic’ reference. In the below code snippet we assign a dynamic property called ‘Name’.
dynamic obj = new clsMyExpando();
obj.Name = "Dynamic Property";


What is the advantage of using custom ‘Expando’ class?

A custom ‘expando’ class can be used to gain performance. If your class has some static properties and some dynamic properties, you can declare the static properties on the custom expando class itself, as shown in the below code. When the static properties of the object are called, no calls are made to the dictionary collection, which increases performance: the DLR engine first tries to bind to real properties before calling ‘TryGetMember’ or ‘TrySetMember’.

First, avoid custom ‘expando’ classes if you have no dynamic property requirement and do not need to communicate with dynamic languages. If you do need dynamic properties, ensure that the properties you are sure about are added to the class statically and the dynamic ones are implemented by inheriting from ‘DynamicObject’.
public class clsMyExpando : DynamicObject
{
    Dictionary<string, object> items = new Dictionary<string, object>();

    private string _Name;
    public string Name
    {
        get { return _Name; }
        set { _Name = value; }
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return items.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        items[binder.Name] = value;
        return true;
    }
}


What are IDynamicMetaObjectProvider and DynamicMetaObject?

‘Dynamic’ objects implement ‘IDynamicMetaObjectProvider’ and return a ‘DynamicMetaObject’. These are the core types which implement interoperability between dynamic languages. So when should we use them?

  • If we want to implement custom logic for fast dynamic property retrieval, we can implement ‘IDynamicMetaObjectProvider’ and ‘DynamicMetaObject’. For instance, you might want to provide a fast algorithm to retrieve the most-used properties; you can plug in that algorithm using ‘IDynamicMetaObjectProvider’.


Can we see performance difference between reflection and dynamic object execution?

Coming soon….

Thanks, Thanks and Thanks


http://tomlev2.wordpress.com/category/code-sample/ :- Sample code which shows how to implement dynamic objects.

http://dlr.codeplex.com/Wiki/View.aspx?title=Docs%20and%20specs :- CodePlex link for the DLR project.

http://msmvps.com/blogs/kathleen/archive/2009/01/07/the-most-important-feature-of-net-4-0.aspx :- Discusses the most important feature of .NET 4.0.

http://blogs.msdn.com/brada/archive/2008/10/29/net-framework-4-poster.aspx :- A nice complete 4.0 poster; ensure you have the Silverlight plug-in to take advantage of the zoom-in feature.

http://www.hanselman.com/blog/C4AndTheDynamicKeywordWhirlwindTourAroundNET4AndVisualStudio2010Beta1.aspx :- Sir Hansel talks about the dynamic keyword.

http://www.codeproject.com/KB/cs/dynamicfun.aspx :- Dynamic example implementation.

http://en.wikipedia.org/wiki/Dynamic_Language_Runtime :- Wiki link for DLR.

http://msdn.microsoft.com/en-us/library/dd233052(VS.100).aspx :- Download link for 4.0 and VS 2010.

http://www.developerfusion.com/article/9576/the-future-of-net-languages/2/ :- Discusses the new language features of 4.0; I have not seen anything simpler on the web.

http://www.gotnet.biz/Blog/file.axd?file=2009%2F6%2FHow+I+Learned+to+Love+Metaprogramming+-+CodeStock+2009.pdf :- Nice PDF by Kevin (MVP) on metaprogramming.

.NET 4.0 detail list of new features

Thanks to http://blogs.msdn.com/brada/archive/2008/10/29/net-framework-4-poster.aspx for publishing a detailed poster of .NET 4.0 features.

Tuesday, September 29, 2009

Best Practices No 5: - Detecting .NET application memory leaks





Introduction

Memory leaks in .NET applications have always been a programmer’s nightmare. They are among the biggest problems on production servers, which normally need to run with minimal downtime. Memory leaks grow slowly and after some time bring down the server by consuming huge chunks of memory. Most of the time people reboot the system, make it work temporarily and send a sorry note to the customer for the downtime.

Please feel free to download my free 500 question and answer eBook which covers .NET , ASP.NET , SQL Server , WCF , WPF , WWF@ http://www.questpond.com .


Avoid task manager to detect memory leak

Using private bytes performance counters to detect memory leak

3 step process to investigate memory leak

What is the type of memory leak? Total Memory = Managed memory + unmanaged memory

How is the memory leak happening?

Where is the memory leak?

Source code

Thanks, Thanks and Thanks



Avoid task manager to detect memory leak

The first and foremost task is to confirm that there is a memory leak. Many developers use Windows Task Manager to confirm whether there is a memory leak in the application. Using Task Manager is not only misleading, it also does not give much information about where the memory leak is.


First, let’s try to understand how the Task Manager memory information is misleading. Task Manager shows working set memory and not the actual memory used. So what does that mean? The working set is allocated memory, not used memory; furthermore, some memory in the working set can be shared with other processes/applications.


So the working set memory can be larger than the memory actually used.

Using private bytes performance counters to detect memory leak

In order to get the right amount of memory consumed by the application, we need to track the private bytes consumed by it. Private bytes are those memory areas which are not shared with other applications. To track the private bytes consumed by an application we use performance counters.
Below are the steps we need to follow to track private bytes in an application using performance counters:-
  • Start your application which has the memory leak and keep it running.
  • Click Start -> Run and type ‘perfmon’.
  • Delete all the current performance counters by selecting each counter and hitting the delete button.
  • Right click -> select ‘Add counters’ -> select ‘Process’ from the performance object list.
  • From the counter list select ‘Private Bytes’.
  • From the instance list select the application which you want to test for memory leaks.
If your application shows a steady increase in the private bytes value, we have a memory leak issue. You can see in the below figure how the private bytes value increases steadily, confirming that the application has a memory leak.


The above graph shows a linear increase, but in a live deployment it can take hours to show the uptrend. To check for a memory leak you may need to run the performance counter for hours, or probably days, on the production server to confirm whether there really is a memory leak.

3 step process to investigate memory leak

Once we have confirmed that there is a memory leak, it’s time to investigate the root of the problem. We will divide our journey to the solution into 3 phases: what, how and where.
  • What:- We will first investigate what type of memory leak it is: a managed or an unmanaged memory leak.
  • How:- What is really causing the memory leak? Is it a connection object, some kind of file whose handle is not closed, etc.?
  • Where:- Which function/routine or logic is causing the memory leak.

What is the type of memory leak? Total Memory = Managed memory + unmanaged memory

Before we try to understand what type of leak it is, let’s try to understand how memory is allocated in .NET applications. .NET applications have two types of memory: managed memory and unmanaged memory. Managed memory is controlled by the garbage collector, while unmanaged memory lies outside the garbage collector’s boundary.


So the first thing we need to ensure what is the type of memory leak is it managed leak or unmanaged leak. In order to detect if it’s a managed leak or unmanaged leak we need to measure two performance counters.
The first one is the private bytes counter for the application which we have already seen in the previous session.
The second counter we need to add is ‘# Bytes in all Heaps’. So select ‘.NET CLR Memory’ as the performance object, select ‘# Bytes in all Heaps’ from the counter list, and then select the application that has the memory leak.


Private bytes are the total memory consumed by the application, while bytes in all heaps are the memory consumed by managed code. So the equation becomes what is shown in the figure below.

Unmanaged memory + Bytes in all heaps = Private bytes; so if we want to find the unmanaged memory we can simply subtract the bytes in all heaps from the private bytes.
Now we can make two statements:-
  • If the private bytes increase and bytes in all heaps remain constant that means it’s an unmanaged memory leak.
  • If the bytes in all heaps increase linearly that means it’s a managed memory leak.
Below is a typical screenshot of an unmanaged leak. You can see private bytes increasing while bytes in all heaps remain constant.


Below is a typical screenshot of a managed leak: bytes in all heaps are increasing.
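The subtraction above can also be automated. The sketch below (Windows-only; the instance name ‘MyApp’ is again a placeholder) reads both counters and computes the unmanaged portion:

```csharp
using System;
using System.Diagnostics;

class LeakClassifier
{
    static void Main()
    {
        // Total memory of the process (managed + unmanaged).
        var privateBytes = new PerformanceCounter("Process", "Private Bytes", "MyApp");

        // Memory owned by the managed (GC) heaps. Note the counter name in the
        // ".NET CLR Memory" category starts with '#'.
        var heapBytes = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", "MyApp");

        float total = privateBytes.NextValue();
        float managed = heapBytes.NextValue();

        // Unmanaged memory = Private Bytes - Bytes in all Heaps.
        float unmanaged = total - managed;
        Console.WriteLine("Managed: {0:N0}  Unmanaged: {1:N0}", managed, unmanaged);

        // Trend these over time: rising 'unmanaged' with flat 'managed' points
        // to an unmanaged leak; rising 'managed' points to a managed leak.
    }
}
```

Sampling these two values periodically gives you the same trend lines as the perfmon screenshots above.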


How is the memory leak happening?

Now that we have answered what type of memory is leaking, it’s time to see how the memory is leaking. In other words, who is causing the memory leak?
So let’s inject an unmanaged memory leak by calling the ‘Marshal.AllocHGlobal’ function. This function allocates unmanaged memory, so calling it repeatedly from a timer, without ever freeing the memory, creates a large unmanaged leak in the application.
// Requires: using System.Runtime.InteropServices;
private void timerUnManaged_Tick(object sender, EventArgs e)
{
    // Allocate 7000 bytes of unmanaged memory on every timer tick and never
    // free it (no matching Marshal.FreeHGlobal) -- a deliberate leak.
    Marshal.AllocHGlobal(7000);
}


It’s very difficult to inject a true managed leak because the GC ensures that memory is reclaimed. To keep things simple, we simulate a managed memory leak by creating lots of brush objects and adding them to a list held in a class-level variable. It’s a simulation, not a real managed leak: once the application closes, this memory is reclaimed.

// Class-level list that keeps every brush reachable, so the GC cannot collect them.
private readonly List<Brush> objBrushes = new List<Brush>();

private void timerManaged_Tick(object sender, EventArgs e)
{
    for (int i = 0; i < 10000; i++)
    {
        Brush obj = new SolidBrush(Color.Blue);
        objBrushes.Add(obj); // rooted reference simulates a managed leak
    }
}

In case you are interested in how leaks can happen in managed memory, refer to the weak event pattern for more information:
http://msdn.microsoft.com/en-us/library/aa970850.aspx
The next step is to download ‘debugdiag’ tool from
http://www.microsoft.com/DOWNLOADS/details.aspx?FamilyID=28bd5941-c458-46f1-b24d-f60151d875a3&displaylang=en
Start the debug diagnostic tool and select ‘Memory and handle leak’ and click next.


Select the process in which you want to detect memory leak.


Finally select ‘Activate the rule now’.


Now let the application run; the ‘Debugdiag’ tool will run in the background, monitoring memory issues.


Once done, click on ‘Start Analysis’ and let the tool run the analysis.


You should get a detailed HTML report showing how the unmanaged memory was allocated. In our code we allocated a large amount of unmanaged memory using ‘AllocHGlobal’, which is shown in the report below.


The managed memory leak from the brushes shows up under ‘GdiPlus.dll’ in the HTML report below.



Where is the memory leak?

Once you know what the source of the memory leak is, it’s time to find out which logic is causing it. There is no automated tool that can point at the exact logic; you need to go through your code manually, using the pointers provided by ‘Debugdiag’ to conclude where the issues are.
For instance, from the report it’s clear that ‘AllocHGlobal’ is causing the unmanaged leak while one of the GDI objects is causing the managed leak. Using these details, we then need to go into the code to see where exactly the issue lies.
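On the managed side, one manual way to narrow a leak down to a specific routine is to bracket the suspect code with GC.GetTotalMemory(true). A minimal sketch (‘SuspectRoutine’ is a placeholder for your own code path):

```csharp
using System;

class WhereIsTheLeak
{
    static void Main()
    {
        // Force a full collection so the baseline excludes collectable garbage.
        long before = GC.GetTotalMemory(true);

        SuspectRoutine(); // placeholder for the code path under suspicion

        long after = GC.GetTotalMemory(true);

        // Managed bytes that survived collection: if this keeps growing across
        // repeated calls, the routine is rooting objects it should release.
        Console.WriteLine("Surviving managed bytes: {0:N0}", after - before);
    }

    static void SuspectRoutine()
    {
        // ... your application logic ...
    }
}
```

Moving this bracket around the candidate functions flagged by the report helps isolate which one actually roots the memory.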

Source code

You can download the source code from the top of this article which can help you inject memory leak.

Thanks, Thanks and Thanks

It would be unfair on my part to say that the above article is entirely my own knowledge. Thanks to all the lovely people out there who have written articles so that one day someone like me could benefit.