Priyang Patel’s Weblog


Test your .NET skills

Posted in Check your skills now by priyangpatel on April 9, 2008

Shared assemblies are installed where?

(A) System Assembly Cache (B) Global Assembly Cache

(C) Machine Assembly Cache (D) Windows Assembly Cache

Which of the following is not a method of System.Object?

(A) GetType (B) ToString (C) Equals (D) Clone

What is the term used to describe the process the Runtime uses to find an assembly?

(A) Locating (B) Probing (C) Searching

What is the default value for the Char type?
Ans: '\0' (the null character, U+0000)

Which of the following is a value type, and not a reference type?
Ans: enum

What is the default version of an assembly?
Ans: 1.0.0.0


Do more with C# & Functional programming – Treat Code as Data

Posted in Visual Studio 2008 by priyangpatel on April 8, 2008

Take advantage of new features in C# 3.0 that let you treat code as data — and save time over more traditional, imperative approaches to programming.

LINQ and C# 3.0 will force you to learn some new idioms in your everyday programming tasks. The idiom that has the most far-reaching consequences is learning to treat code as data. Every time you build a query expression, you’re treating code as data. You’re passing around bits of code or logic as parameters to a method. The methods in the LINQ libraries don’t return data, but delegates that can create the data when you need them. This might feel strange, but it’s not as far-fetched as it sounds. From the outside, it shouldn’t matter whether a data value is cached or is computed from first principles. For example, this bit of code shouldn’t seem scary:

var num = Math.Sin(Math.PI / 4.0);

Do you know whether Math.Sin computes the sine of the angle when you call it? In many libraries, numeric methods like these are implemented using a large lookup table. The method simply returns the value in the lookup table, or performs a linear interpolation between the two nearest values if the requested angle isn’t in the lookup table.

From your perspective as the one initiating a call to this method, it doesn’t matter. The contract of the method is to return a value corresponding to the input parameter. How it happens isn’t important.

There’s one key point to consider here: I said that how the calculation happens isn’t important. That’s true — unless the calculation depends on some side effect. Sine doesn’t depend on any side effects, so it works no matter what. Other methods aren’t pure functions. For example, this method depends on information beyond the current parameters:

public static decimal CurrentTemperature(int zipCode)

Calling this method at different times with the same input gives different answers. Temperature varies over the course of a day. Substituting the answer (a number that won’t change) for a function (some way to find the current answer) doesn’t work.

There are also quite a few gray areas, where the answer to whether or not you can substitute a function for data or vice versa turns out to be: “It depends.”

Methods as Parameters Are Familiar
You’ve worked with methods as parameters before. The List.RemoveAll() method uses a predicate to determine what items to remove from a list. This predicate is a pure function; it depends only on its input:

numbers.RemoveAll((n) => n > 20);

You can also use the ForEach method to print a list of numbers:

numbers.ForEach((n) => Console.WriteLine(n));

However, this bit of code is much more complicated, and it has dependencies related to how the internal algorithm is implemented. For example, this code removes all numbers from a list of integers where the number is greater than its index in the list:

numbers.RemoveAll((n) => n > numbers.IndexOf(n));

This isn’t a pure function because the output depends on something other than the input. Namely, it depends on the current program’s state. Does RemoveAll() remove each element as it’s processed? That would change the current index of the items. Or, does it perform all the tests and then perform a bulk remove? In which order does it examine the list? First to last? Or last to first? The results of this code will depend on the answers to these questions. (For the record, RemoveAll performs all of the tests, and then removes all of the items. Knowing that doesn’t make this code any more excusable, however.)
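One way to sidestep the problem entirely is to make the test pure by pairing each element with its original index up front. This sketch uses the indexed overload of Where rather than RemoveAll; the approach is my illustration, not from the article:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 5, 0, 1, 9, 2 };

        // Where's indexed overload hands each element its original position,
        // so the test no longer depends on how removal mutates the list.
        var kept = numbers.Where((n, index) => n <= index).ToList();

        Console.WriteLine(string.Join(", ", kept));  // 0, 1, 2
    }
}
```

Because the predicate sees only the element and its fixed original index, the result is the same no matter what order the library evaluates it in.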

There are quite a few new techniques and concepts that you use when you begin to think of your code as data. You’ll be using lambda expressions, deferred execution, closures, higher order functions, and function composition. And, unlike switching to a pure functional language, you’ll likely be mixing your current object-oriented style of programming with this new functional approach, where functions are data. Yes, it’s a steep learning curve, but the results are worth the effort.

It’s possible to implement every one of the techniques just mentioned in C# 2.0, but you can do so much more easily in C# 3.0 because the syntax is so much cleaner, so I’ll show you how to implement these techniques using C# 3.0’s syntax.

A lambda expression is nothing more than a simplified way to express a method. (In the formal definition, lambda expressions shouldn’t have any side effects, but C# doesn’t enforce this rule.) Consider this statement from earlier in the article:

numbers.ForEach((n) => Console.WriteLine(n));

This is nothing more than a concise way of saying:

numbers.ForEach(delegate(int n)
{
    Console.WriteLine(n);
});

Using the lambda syntax, the compiler infers the type of the parameter (an integer) and the type of the return (void in this case). There’s nothing too earth-shattering here, but you must keep the key point in mind: You’re passing a function (in the form of a delegate) to the ForEach method. Essentially, the parameter is describing the algorithm. That’s a fundamental change in terms of how you think about your code.

Deferred execution changes your thinking about code in some important ways (see Listing 1). Now consider the output from a test that runs the code in Listing 1:

2/19/2008 2:18:14 PM
2/19/2008 2:18:23 PM
2/19/2008 2:18:32 PM
2/19/2008 2:18:41 PM
2/19/2008 2:18:50 PM
Do it again
2/19/2008 2:19:08 PM
2/19/2008 2:19:17 PM
2/19/2008 2:19:26 PM
2/19/2008 2:19:35 PM
2/19/2008 2:19:44 PM

I chose to use the DateTime.Now property to generate the sequence because it gives you a clear picture of when operations happen. You can see that there’s a nine-second delay between generating each sequence item. Also, when you examine the sequence again, you get a totally different sequence of times. The sequence is an algorithm that can create values, but the sequence isn’t the values themselves. Again, you’re now treating code as data. The sequence of values doesn’t exist until you ask for it. Even after you ask for it, the variable sequence still doesn’t contain values. If you examine it again, you see a new sequence of values.
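Listing 1 isn’t reproduced here, but its shape is easy to sketch: an iterator method that produces each value only when the caller asks for it. The names below are assumptions for illustration, not the published listing:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // Yields one timestamp per element; the body runs lazily,
    // one step per MoveNext, not when the method is called.
    static IEnumerable<DateTime> GenerateTimes(int count)
    {
        for (int i = 0; i < count; i++)
            yield return DateTime.Now;
    }

    static void Main()
    {
        var sequence = GenerateTimes(5);   // no values exist yet

        foreach (var t in sequence)
            Console.WriteLine(t);

        Console.WriteLine("Do it again");

        // Enumerating again re-runs the iterator and produces fresh values.
        foreach (var t in sequence)
            Console.WriteLine(t);
    }
}
```

(The nine-second gaps in the output above suggest the real listing also sleeps between elements; that detail is omitted here.)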

Closures Introduce Bound Variables
One more bit of dry computer science, and then we can move onto the more interesting ramifications of treating code as data. Assume you alter Listing 1 to create different behavior (see Listing 2). Now, examine its output:

2/19/2008 2:34:27 PM
2/19/2008 2:34:27 PM
2/19/2008 2:34:27 PM
2/19/2008 2:34:27 PM
2/19/2008 2:34:27 PM
Do it again
2/19/2008 2:35:21 PM
2/19/2008 2:35:21 PM
2/19/2008 2:35:21 PM
2/19/2008 2:35:21 PM
2/19/2008 2:35:21 PM

What changed? Well, the compiler created a closure containing Current as a bound variable. A closure is a way to inject local variables (or parameters) into the body of the lambda expression. Those local variables are referred to as “bound variables.” The closure contains both the local variables and lambda expressions. The code is implemented in such a way that changes to the bound variable outside of the lambda expression are reflected inside the lambda expression, and vice versa. In this piece of code, you see that the generator returns a sequence containing five copies of the current time. Later, you modify the value of the bound variable (current), outside the lambda. The next time you enumerate the sequence, you get five copies of the newer version of the variable.
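Listing 2 isn’t shown here either, but the behavior it demonstrates can be sketched with a captured local. The variable names are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        DateTime current = DateTime.Now;

        // The lambda captures 'current' as a bound variable. Each enumeration
        // reads the variable's value at that moment, not at definition time.
        IEnumerable<DateTime> sequence = Enumerable.Range(0, 5).Select(_ => current);

        foreach (var t in sequence)
            Console.WriteLine(t);          // five identical timestamps

        Console.WriteLine("Do it again");

        current = current.AddMinutes(1);   // change the bound variable

        foreach (var t in sequence)
            Console.WriteLine(t);          // five copies of the new value
    }
}
```

The closure is what connects the assignment outside the lambda to the values the lambda produces later.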

Putting This to Work
All of this is wonderful, but why should you care? Using this kind of algorithm can help you create snippets of code to reuse later. Think about how many times you’ve written code like this:

var currentCustomers =
    from c in customerList
    where c.Orders.Count > 0
    select c;

Because that variable contains code, not just data, you’re actually creating a bit of logic that gives you the current customer list when requested, rather than when the logic executed originally. Instead of copying that code everywhere, you need only access that code when you need it.
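That deferred behavior is easy to see with a small, self-contained sketch (the Customer type here is an assumption for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Customer
{
    public string Name { get; set; }
    public List<string> Orders { get; set; }
}

class Program
{
    static void Main()
    {
        var customerList = new List<Customer>
        {
            new Customer { Name = "A", Orders = new List<string> { "order1" } },
            new Customer { Name = "B", Orders = new List<string>() }
        };

        // The query variable holds logic, not results.
        var currentCustomers = from c in customerList
                               where c.Orders.Count > 0
                               select c;

        Console.WriteLine(currentCustomers.Count());   // 1

        customerList.Add(new Customer { Name = "C", Orders = new List<string> { "order2" } });

        Console.WriteLine(currentCustomers.Count());   // 2 — re-evaluated on demand
    }
}
```

Adding a customer after the query was defined still shows up in the results, because the query runs each time you ask for them.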

Another advantage is that you can work with sequences that are far too large to examine or process on your local machine. You can chain these sequence operators together. When you do that, you’re not making new copies of data. You’re manipulating the algorithm and the functions, and that new set of functions provides a new answer when you examine it.

You can see this at work by converting an ancient numeric algorithm from imperative to declarative. You can find full source for this conversion in the online code, but I’ll highlight the key points in this article’s inline code. Hero of Alexandria’s algorithm for finding square roots lets you find the square root of any number S, by starting with a guess G (S-1 works fine). The next guess is computed using the formula ((S / G + G) / 2). For example, to find the square root of 2, you start with 1 as the guess. The next guess is 1.5 ((2 / 1 + 1) / 2). The next guess is 1.416 ((2 / 1.5 + 1.5) / 2). After enough iterations, the answer converges on the square root.

You begin with a classic C# imperative implementation of Hero’s algorithm (see Listing 3). Next, you make a set of changes and re-implement this algorithm to make it more declarative, or functional (see Listing 4).

It’s a twist, so look at this revised listing carefully. Begin with HeroRootFunc, which defines a function that creates a sequence of guesses. It returns the last number in the sequence. The method contains two anonymous methods that define how to generate the next number, and when to stop. This expression defines how to generate the next number:

(g) => ((square / g + g) / 2)

This expression defines when to terminate the sequence:

(c, n) => Math.Abs(c - n) > epsilon

The query expression returns the entire sequence. The Last() extension method returns the last value in the sequence, which is the best answer.

The GenerateSequence() method generates the sequence while the test method returns true. It creates the sequence by evaluating each of the functions used as arguments. These methods generate the sequence and perform the tests. However, they only generate the sequence when someone asks for the final number.

Look again at the implementation of the functional version of Hero’s algorithm. The sequence function generates an infinite sequence. This algorithm would run out of memory if it were imperative. No matter how you do it, you can’t fit an infinite number of elements in memory. It would also take an infinite amount of time. And yet, this works, because the functions defined as parameters are evaluated only when requested. Also, the GenerateSequence() method can be used for other purposes.
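Listings 3 and 4 live in the online code, but the functional version can be sketched from the fragments shown above. GenerateSequence and HeroRootFunc are the article’s names; the bodies below are my reconstruction, not the published listings:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    // Generates guesses while the test on the current and next
    // values says to keep going; lazily evaluated throughout.
    static IEnumerable<double> GenerateSequence(
        double seed,
        Func<double, double> next,
        Func<double, double, bool> keepGoing)
    {
        double current = seed;
        double following = next(current);
        yield return current;
        while (keepGoing(current, following))
        {
            current = following;
            following = next(current);
            yield return current;
        }
    }

    static double HeroRootFunc(double square, double epsilon)
    {
        return GenerateSequence(
            square - 1,                            // initial guess
            g => (square / g + g) / 2,             // next guess
            (c, n) => Math.Abs(c - n) > epsilon)   // stop once converged
            .Last();
    }

    static void Main()
    {
        Console.WriteLine(HeroRootFunc(2.0, 1e-10));   // ≈ 1.4142135...
    }
}
```

Only Last() forces any work; nothing in the sequence is computed until that final pull.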

Not every problem is best solved using functional approaches, but many problems can be solved more succinctly and more clearly by rethinking parameters and return types. Instead of sending all the data, you can send along a function that can generate the data you need. Sometimes that can give you the answer while requiring much less work on your part.

Highly customizable CAPTCHA verification. It’s free!

Posted in Visual Studio Tips and Tricks by priyangpatel on April 7, 2008

Defend your ASP.NET sites against evil bots with this free web control that provides instant, highly customizable CAPTCHA verification.

Visual Studio Posters! Very useful

Posted in Visual Studio Tips and Tricks by priyangpatel on April 2, 2008

Here is an interesting blog post containing links to posters about Microsoft technologies.

It lists posters for Visual Studio 2008, C#, and VB, as well as posters for developers, business apps, and professional apps.

To get it, Click Here.


C# 3.0 New language features

Posted in C# 3.0,Visual Studio 2008 by priyangpatel on March 27, 2008

Automatic Properties

If you are a C# developer today, you are probably quite used to writing classes with basic properties like the code-snippet below:

public class Person {

    private string _firstName;
    private string _lastName;
    private int _age;

    public string FirstName {
        get {
            return _firstName;
        }
        set {
            _firstName = value;
        }
    }

    public string LastName {
        get {
            return _lastName;
        }
        set {
            _lastName = value;
        }
    }

    public int Age {
        get {
            return _age;
        }
        set {
            _age = value;
        }
    }
}

Note that we aren’t actually adding any logic in the getters/setters of our properties – instead we just get/set the value directly to a field. This raises the question: why not just use fields instead of properties? Well – there are a lot of downsides to exposing public fields. Two of the big problems are: 1) you can’t easily databind against fields, and 2) if you expose public fields from your classes you can’t later change them to properties (for example: to add validation logic to the setters) without recompiling any assemblies compiled against the old class.

The new C# compiler that ships in “Orcas” provides an elegant way to make your code more concise while still retaining the flexibility of properties using a new language feature called “automatic properties”. Automatic properties allow you to avoid having to manually declare a private field and write the get/set logic — instead the compiler can automate creating the private field and the default get/set operations for you.

For example, using automatic properties I can now re-write the code above to just be:

public class Person {

public string FirstName {
get; set;
}

public string LastName {
get; set;
}

public int Age {
get; set;
}
}

Or, if I want to be really terse, I can collapse the whitespace even further like so:

public class Person {
public string FirstName { get; set; }
public string LastName { get; set; }
public int Age { get; set; }
}

When the C# “Orcas” compiler encounters an empty get/set property implementation like above, it will now automatically generate a private field for you within your class, and implement a public getter and setter for it. The benefit of this is that from a type-contract perspective, the class looks exactly like it did with our first (more verbose) implementation above. This means that, unlike public fields, I can in the future add validation logic within my property setter implementation without having to change any external component that references my class.

Object Initializers

Types within the .NET Framework rely heavily on the use of properties. When instantiating and using new classes, it is very common to write code like below:

Person person = new Person();
person.FirstName = "Scott";
person.LastName = "Guthrie";
person.Age = 32;

Have you ever wanted to make this more concise (and maybe fit on one line)? With the C# and VB “Orcas” compilers you can now take advantage of a great “syntactic sugar” language feature called “object initializers” that allows you to do this and re-write the above code like so:

Person person = new Person { FirstName = "Scott", LastName = "Guthrie", Age = 32 };

The compiler will then automatically generate the appropriate property setter code that preserves the same semantic meaning as the previous (more verbose) code sample above.
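The translation can be sketched directly. This self-contained example (the temporary’s name is mine; the compiler generates its own) shows the initializer form next to its rough expansion:

```csharp
using System;

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}

class Program
{
    static void Main()
    {
        // The initializer form...
        Person a = new Person { FirstName = "Scott", LastName = "Guthrie", Age = 32 };

        // ...is roughly what the compiler expands to: construct into a
        // temporary, run each property setter, then assign the variable.
        Person temp = new Person();
        temp.FirstName = "Scott";
        temp.LastName = "Guthrie";
        temp.Age = 32;
        Person b = temp;

        Console.WriteLine(a.Age == b.Age);   // True
    }
}
```

The temporary is why a partially-initialized object is never visible through the declared variable.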

In addition to setting simple property values when initializing a type, the object initializer feature allows us to optionally set more complex nested property types. For example, assume each Person type we defined above also has a property called “Address” of type “Address”. We could then write the below code to create a new “Person” object and set its properties like so:


Person person = new Person {
    FirstName = "Scott",
    LastName = "Guthrie",
    Age = 32,
    Address = new Address {
        Street = "One Microsoft Way",
        City = "Redmond",
        State = "WA",
        Zip = 98052
    }
};

Collection Initializers

Object Initializers are great, and make it much easier to concisely add objects to collections. For example, if I wanted to add three people to a generics-based List collection of type “Person”, I could write the below code:

List<Person> people = new List<Person>();

people.Add( new Person { FirstName = "Scott", LastName = "Guthrie", Age = 32 } );
people.Add( new Person { FirstName = "Bill", LastName = "Gates", Age = 50 } );
people.Add( new Person { FirstName = "Susanne", LastName = "Guthrie", Age = 32 } );

Using the new Object Initializer feature alone saved 12 extra lines of code with this sample versus what I’d need to type with the C# 2.0 compiler.

The C# and VB “Orcas” compilers allow us to go even further, though, and also now support “collection initializers” that allow us to avoid having multiple Add statements, and save even further keystrokes:

List<Person> people = new List<Person> {
    new Person { FirstName = "Scott", LastName = "Guthrie", Age = 32 },
    new Person { FirstName = "Bill", LastName = "Gates", Age = 50 },
    new Person { FirstName = "Susanne", LastName = "Guthrie", Age = 32 }
};

When the compiler encounters the above syntax, it will automatically generate the collection insert code like the previous sample for us.

New Designer Support in Visual Studio 2008

Posted in Visual Studio 2008 by priyangpatel on March 26, 2008

The following video illustrates new features that are useful for developers as well as designers.

Click here!

PowerCommands for Visual Studio 2008

Posted in Visual Studio Tips and Tricks by priyangpatel on March 25, 2008

PowerCommands is a set of useful extensions for Visual Studio 2008 that add additional functionality to various areas of the IDE. The source code is included and requires the VS SDK for VS 2008 to allow modification of functionality or as a reference to create additional custom PowerCommand extensions. Visit the VSX Developer Center at http://msdn.com/vsx for more information about extending Visual Studio.

The Releases page contains download files (MSI installation file, readme document, and source code project).


Below is a list of the commands included in PowerCommands for Visual Studio 2008 version 1.0. Refer to the Readme document, which includes many additional screenshots.

Collapse Projects
This command collapses a project or projects in the Solution Explorer starting from the root selected node. Collapsing a project can increase the readability of the solution. This command can be executed from three different places: solution, solution folders and project nodes respectively.

Copy Class
This command copies a selected class’s entire contents to the clipboard. It is normally followed by a Paste Class command, which renames the class to avoid a compilation error. It can be executed from a single project item or a project item with dependent sub items.

Paste Class
This command pastes a class’s entire contents from the clipboard, renaming the class to avoid a compilation error. This command is normally preceded by a Copy Class command. It can be executed from a project or folder node.

Copy References
This command copies a reference or set of references to the clipboard. It can be executed from the references node, a single reference node or set of reference nodes.

Paste References
This command pastes a reference or set of references from the clipboard. It can be executed from different places depending on the type of project. For CSharp projects it can be executed from the references node. For Visual Basic and Website projects it can be executed from the project node.

Copy As Project Reference
This command copies a project as a project reference to the clipboard. It can be executed from a project node.

Edit Project File
This command opens the MSBuild project file for a selected project inside Visual Studio. It combines the existing Unload Project and Edit Project commands.

Open Containing Folder
This command opens a Windows Explorer window pointing to the physical path of a selected item. It can be executed from a project item node.

Open Command Prompt
This command opens a Visual Studio command prompt pointing to the physical path of a selected item. It can be executed from four different places: solution, project, folder and project item nodes respectively.

Unload Projects
This command unloads all projects in a solution. This can be useful in MSBuild scenarios when multiple projects are being edited. This command can be executed from the solution node.

Reload Projects
This command reloads all unloaded projects in a solution. It can be executed from the solution node.

Remove and Sort Usings
This command removes and sorts using statements for all classes in a given project. It is useful, for example, in removing or organizing the using statements generated by a wizard. This command can be executed from a solution node or a single project node.
Note: The Remove and Sort Usings feature is only available for C# projects since the C# editor implements this feature as a command in the C# editor (which this command calls for each .cs file in the project).

Extract Constant
This command creates a constant definition statement for a selected text. Extracting a constant effectively names a literal value, which can improve readability. This command can be executed from the code editor by right-clicking selected text.

Clear Recent File List
This command clears the Visual Studio recent file list. The Clear Recent File List command brings up a Clear File dialog which allows any or all recent files to be selected.

Clear Recent Project List
This command clears the Visual Studio recent project list. The Clear Recent Project List command brings up a Clear File dialog which allows any or all recent projects to be selected.

Transform Templates
This command executes a custom tool on associated text template items. It can be executed from a DSL project node or a DSL folder node.

Close All
This command closes all documents. It can be executed from a document tab.

Multi Targeting Support – Visual Studio 2008

Posted in Visual Studio 2008 by priyangpatel on March 24, 2008

What is Multi-Targeting?

With the past few releases of Visual Studio, each Visual Studio release only supported a specific version of the .NET Framework. For example, VS 2002 only worked with .NET 1.0, VS 2003 only worked with .NET 1.1, and VS 2005 only worked with .NET 2.0.

One of the big changes we are making starting with the VS 2008 release is to support what we call “Multi-Targeting” – which means that Visual Studio will now support targeting multiple versions of the .NET Framework, and developers will be able to start taking advantage of the new features Visual Studio provides without having to always upgrade their existing projects and deployed applications to use a new version of the .NET Framework library.

Now when you open an existing project or create a new one with VS 2008, you can pick which version of the .NET Framework to work with – and the IDE will update its compilers and feature-set to match this. Among other things, this means that features, controls, projects, item-templates, and assembly references that don’t work with that version of the framework will be hidden, and when you build your application you’ll be able to take the compiled output and copy it onto a machine that only has an older version of the .NET Framework installed, and you’ll know that the application will work.

Creating a New Project in VS 2008 that targets .NET 2.0

To see an example of multi-targeting in action on a recent build of VS 2008 Beta 2, we can select File->New Project to create a new application.

Notice below how in the top-right of the new project dialog there is now a dropdown that allows us to indicate which versions of the .NET Framework we want to target when we create the new project. If I keep it selected on .NET Framework 3.5, I’ll see a bunch of new project templates listed that weren’t in previous versions of VS (including support for WPF client applications and WCF web service projects):

If I change the dropdown to target .NET 2.0 instead, it will automatically filter the project list to only show those project templates supported on machines with the .NET 2.0 framework installed:

If I create a new ASP.NET Web Application with the .NET 2.0 dropdown setting selected, it will create a new ASP.NET project whose compilation settings, assembly references, and web.config settings are configured to work with existing ASP.NET 2.0 servers:

When you go to the control Toolbox, you’ll see that only those controls that work on ASP.NET 2.0 are listed:


And if you choose Add->Reference and bring up the assembly reference picker dialog, you’ll see that those .NET class assemblies that aren’t supported on .NET 2.0 are grayed out and can’t be added to the project (notice how the “ok” button is not active below when I have a .NET 3.0 or .NET 3.5 assembly selected):

So why use VS 2008 if you aren’t using the new .NET 3.5 features?

You might be wondering: “so what value do I get when using VS 2008 to work on an ASP.NET 2.0 project versus just using my VS 2005 today?” Well, the good news is that you get a ton of tool-specific value with VS 2008 that you’ll be able to take advantage of immediately with your existing projects without having to upgrade your framework/ASP.NET version. A few big tool features in the web development space I think you’ll really like include:

  1. JavaScript intellisense
  2. Much richer JavaScript debugging
  3. Nested ASP.NET master page support at design-time
  4. Rich CSS editing and layout support within the WYSIWYG designer
  5. Split-view designer support for having both source and design views open on a page at the same time
  6. A much faster ASP.NET page designer – with dramatic perf improvements in view-switches between source/design mode
  7. Automated .SQL script generation and hosting deployment support for databases on remote servers

You’ll be able to use all of the above features with any version of the .NET Framework – without having to upgrade your project to necessarily target newer framework versions. I’ll be blogging about these features (as well as the great new framework features) over the next few weeks.

So how can I upgrade an existing project to .NET 3.5 later?

If at a later point you want to upgrade your project/site to target the .NET 3.0 or .NET 3.5 version of the framework libraries, you can right-click on the project in the Solution Explorer and pull up its properties page:

You can change the “Target Framework” dropdown to select the version of the framework you want the project to target. Doing this will cause VS to automatically update compiler settings and references for the project to use the correct framework version. For example, it will by default add some of the new LINQ assemblies to your project, as well as add the new System.Web.Extensions assembly that ships in .NET 3.5 which delivers new ASP.NET controls/runtime features and provides built-in ASP.NET AJAX support (this means that you no longer need to download the separate ASP.NET AJAX 1.0 install – it is now just built-in with the .NET 3.5 setup):

Once you change your project’s target version you’ll also see new .NET 3.5 project item templates show up in your add->new items dialog, you’ll be able to reference assemblies built against .NET 3.5, as well as see .NET 3.5 specific controls show up in your toolbox.

For example, below you can now see the new ListView control (which is an awesome new control that provides the ability to do data reporting, editing, insert, delete and paging scenarios – with 100% control over the markup generated and no inline styles or other HTML elements), as well as the new LinqDataSource control (which enables you to easily bind and work against LINQ to SQL data models), and the DataPager control show up under the “Data” section of our toolbox:

Note that in addition to changing your framework version “up” in your project properties dialog, you can also optionally take a project that is currently building against .NET 3.0 or 3.5 and change it “down” (for example: move it from .NET 3.5 to 2.0). This will automatically remove the newer assembly references from your project, update your web.config file, and allow you to compile against the older framework (note: if you have code in the project that was written against the new APIs, obviously you’ll need to change it).

What about .NET 1.0 and 1.1?

Unfortunately the VS 2008 multi-targeting support only works with .NET 2.0, .NET 3.0 and .NET 3.5 – and not against older versions of the framework. The reason for this is that there were significant CLR engine changes between .NET 1.x and 2.x that make debugging very difficult to support. In the end the cost of the work to support that was so large and impacted so many parts of Visual Studio that we weren’t able to add 1.1 support in this release.

VS 2008 does run side-by-side, though, with VS 2005, VS 2003, and VS 2002. So it is definitely possible to continue targeting .NET 1.1 projects using VS 2003 on the same machine as VS 2008.

Use explicit casting instead of DataBinder.Eval

Posted in Visual Studio Tips and Tricks by priyangpatel on March 19, 2008

The DataBinder.Eval method uses .NET reflection to evaluate the arguments that are passed in and to return the results. Consider limiting the use of DataBinder.Eval during data binding operations in order to improve ASP.NET page performance.

Consider the following ItemTemplate element within a Repeater control using DataBinder.Eval:

<ItemTemplate>
  <tr>
    <td><%# DataBinder.Eval(Container.DataItem, "field1") %></td>
    <td><%# DataBinder.Eval(Container.DataItem, "field2") %></td>
  </tr>
</ItemTemplate>

Using explicit casting offers better performance by avoiding the cost of .NET reflection. Cast the Container.DataItem as a DataRowView:

<ItemTemplate>
  <tr>
    <td><%# ((DataRowView)Container.DataItem)["field1"] %></td>
    <td><%# ((DataRowView)Container.DataItem)["field2"] %></td>
  </tr>
</ItemTemplate>
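The cost difference is general to reflection, not specific to ASP.NET. This standalone sketch (the Item type and loop counts are mine for illustration) contrasts a reflection-based property read, roughly what DataBinder.Eval does internally, with a compile-time cast:

```csharp
using System;
using System.Diagnostics;

class Item
{
    public string Field1 { get; set; }
}

class Program
{
    static void Main()
    {
        object dataItem = new Item { Field1 = "value" };
        const int N = 1000000;

        // Reflection-based lookup: the property is resolved by name at runtime.
        var reflectionTimer = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
        {
            object v = typeof(Item).GetProperty("Field1").GetValue(dataItem, null);
        }
        reflectionTimer.Stop();

        // Explicit cast: the member access is resolved at compile time.
        var castTimer = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
        {
            string v = ((Item)dataItem).Field1;
        }
        castTimer.Stop();

        Console.WriteLine("reflection: " + reflectionTimer.ElapsedMilliseconds
            + " ms, cast: " + castTimer.ElapsedMilliseconds + " ms");
    }
}
```

Both forms return the same value; only the per-access lookup cost differs, which is what adds up across thousands of bound rows.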
