Priyang Patel’s Weblog


Do more with C# & Functional programming – Treat Code as Data

Posted in Visual Studio 2008 by priyangpatel on April 8, 2008

Take advantage of new features in C# 3.0 that let you treat code as data — and save time over more traditional, imperative approaches to programming.

LINQ and C# 3.0 will force you to learn some new idioms in your everyday programming tasks. The idiom that has the most far-reaching consequences is learning to treat code as data. Every time you build a query expression, you’re treating code as data. You’re passing around bits of code or logic as parameters to a method. The methods in the LINQ libraries don’t return data, but delegates that can create the data when you need them. This might feel strange, but it’s not as far-fetched as it sounds. From the outside, it shouldn’t matter whether a data value is cached or is computed from first principles. For example, this bit of code shouldn’t seem scary:

var num = Math.Sin(Math.PI / 4.0);

Do you know whether Math.Sin computes the sine of the angle when you call it? In many libraries, numeric methods like these are implemented using a large lookup table. The method simply returns the value in the lookup table, or makes a linear interpolation of the two nearest values if the requested angle isn’t in the lookup table.

From your perspective as the one initiating a call to this method, it doesn’t matter. The contract of the method is to return a value corresponding to the input parameter. How it happens isn’t important.

There’s one key point to consider here: I said that how the calculation happens isn’t important. That’s true — unless the calculation depends on some side effect. Sine doesn’t depend on any side effects, so it works no matter what. Other methods aren’t pure functions. For example, this method depends on information beyond the current parameters:

public static decimal CurrentTemperature(int zipCode)

Calling this method at different times with the same input gives different answers. Temperature varies over the course of a day. Substituting the answer (a number that won’t change) for a function (some way to find the current answer) doesn’t work.
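To make the distinction concrete, here’s a small illustrative sketch (the WeatherService lookup is hypothetical, named only for illustration):

// Pure: the result depends only on the input, so you could substitute
// a cached value for the call without changing the program's behavior.
public static decimal Square(decimal x)
{
    return x * x;
}

// Impure: the result depends on external state (the weather right now),
// so a stored value can't stand in for the call.
public static decimal CurrentTemperature(int zipCode)
{
    return WeatherService.GetTemperature(zipCode); // hypothetical data source
}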

There are also quite a few gray areas, where the answer to whether or not you can substitute a function for data or vice versa turns out to be: “It depends.”

Methods as Parameters Are Familiar
You’ve worked with methods as parameters before. The List<T>.RemoveAll() method uses a predicate to determine what items to remove from a list. This predicate is a pure function; it depends only on its input:

numbers.RemoveAll((n) => n > 20);

You can also use the ForEach method to print a list of numbers:

numbers.ForEach((n) => Console.WriteLine(n));

However, this bit of code is much more complicated, and it has dependencies related to how the internal algorithm is implemented. For example, this code removes all numbers from a list of integers where the number is greater than its index in the list:

numbers.RemoveAll((n) => n > numbers.IndexOf(n));

This isn’t a pure function because the output depends on something other than the input. Namely, it depends on the current program’s state. Does RemoveAll() remove each element as it’s processed? That would change the current index of the items. Or, does it perform all the tests and then perform a bulk remove? In which order does it examine the list? First to last? Or last to first? The results of this code will depend on the answers to these questions. (For the record, RemoveAll performs all of the tests, and then removes all of the items. Knowing that doesn’t make this code any more excusable, however.)

There are quite a few new techniques and concepts that you use when you begin to think of your code as data. You’ll be using lambda expressions, deferred execution, closures, higher-order functions, and function composition. And, unlike switching to a pure functional language, you’ll likely be mixing your current object-oriented style of programming with this new functional approach, where functions are data. Yes, it’s a steep learning curve, but the results are worth the effort.

It’s possible to implement every one of these techniques in C# 2.0, but doing so is much easier in C# 3.0 because the syntax is so much cleaner. I’ll show you how to implement these techniques using C# 3.0’s syntax.

A lambda expression is nothing more than a simplified way to express a method. (In the formal definition, lambda expressions shouldn’t have any side effects, but C# doesn’t enforce this rule.) Consider this statement from earlier in the article:

numbers.ForEach((n) => Console.WriteLine(n));

This is nothing more than a concise way of saying:

numbers.ForEach(delegate(int n)
{
Console.WriteLine(n);
});

Using the lambda syntax, the compiler infers the type of the parameter (an integer) and the type of the return (void in this case). There’s nothing too earth-shattering here, but you must keep the key point in mind: You’re passing a function (in the form of a delegate) to the ForEach method. Essentially, the parameter is describing the algorithm. That’s a fundamental change in terms of how you think about your code.

Deferred execution changes your thinking about code in some important ways (see Listing 1). Now consider the output from a test that runs the code in Listing 1:

2/19/2008 2:18:14 PM
2/19/2008 2:18:23 PM
2/19/2008 2:18:32 PM
2/19/2008 2:18:41 PM
2/19/2008 2:18:50 PM
Do it again
2/19/2008 2:19:08 PM
2/19/2008 2:19:17 PM
2/19/2008 2:19:26 PM
2/19/2008 2:19:35 PM
2/19/2008 2:19:44 PM

I chose to use the DateTime.Now property to generate the sequence because it gives you a clear picture of when operations happen. You can see that there’s a nine-second delay between generating the next sequence item. Also, when you examine the sequence again, you get a totally different sequence of times. The sequence is an algorithm that can create values, but the sequence isn’t the values themselves. Again, you’re now treating code as data. The sequence of values doesn’t exist until you ask for it. Even after you ask for it, the variable sequence still doesn’t contain values. If you examine it again, you see a new sequence of values.
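Listing 1 itself isn’t reproduced inline, but a minimal sketch of the kind of lazy generator it describes might look like this (the method name and the nine-second delay are assumptions based on the output above):

static IEnumerable<DateTime> TimeSequence()
{
    for (int i = 0; i < 5; i++)
    {
        // DateTime.Now is read lazily, at enumeration time, not when
        // the sequence variable is created.
        yield return DateTime.Now;
        System.Threading.Thread.Sleep(9000); // the nine-second gap
    }
}

var sequence = TimeSequence();
foreach (var time in sequence) Console.WriteLine(time);
Console.WriteLine("Do it again");
// Enumerating the same variable again runs the generator again:
foreach (var time in sequence) Console.WriteLine(time);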

Closures Introduce Bound Variables
One more bit of dry computer science, and then we can move on to the more interesting ramifications of treating code as data. Assume you alter Listing 1 to create different behavior (see Listing 2). Now, examine its output:

2/19/2008 2:34:27 PM
2/19/2008 2:34:27 PM
2/19/2008 2:34:27 PM
2/19/2008 2:34:27 PM
2/19/2008 2:34:27 PM
Do it again
2/19/2008 2:35:21 PM
2/19/2008 2:35:21 PM
2/19/2008 2:35:21 PM
2/19/2008 2:35:21 PM
2/19/2008 2:35:21 PM

What changed? Well, the compiler created a closure containing current as a bound variable. A closure is a way to inject local variables (or parameters) into the body of a lambda expression. Those local variables are referred to as “bound variables.” The closure contains both the local variables and the lambda expressions. The code is implemented in such a way that changes to the bound variable outside of the lambda expression are reflected inside the lambda expression, and vice versa. In this piece of code, you see that the generator returns a sequence containing five copies of the current time. Later, you modify the value of the bound variable (current) outside the lambda. The next time you enumerate the sequence, you get five copies of the newer value of the variable.
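Listing 2 isn’t reproduced inline either; a minimal sketch of the behavior it demonstrates, assuming the bound variable is named current, could look like this:

DateTime current = DateTime.Now;
// The lambda captures the variable 'current' itself, not a copy of its value.
IEnumerable<DateTime> sequence = Enumerable.Range(0, 5).Select(i => current);

foreach (var time in sequence) Console.WriteLine(time); // five identical times
Console.WriteLine("Do it again");
current = DateTime.Now; // modify the bound variable outside the lambda
foreach (var time in sequence) Console.WriteLine(time); // five copies of the new value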

Putting This to Work
All of this is wonderful, but why should you care? Using this kind of algorithm can help you create snippets of code to reuse later. Think about how many times you’ve written code like this:

var currentCustomers =
    from c in customerList
    where c.Orders.Count > 0
    select c;

Because that variable contains code, not just data, you’re actually creating a bit of logic that gives you the current customer list at the moment it’s requested, rather than a snapshot taken when the query was first declared. Instead of copying that data everywhere, you need only execute that code when you need it.

Another advantage is that you can work with sequences that are far too large to examine or process on your local machine. You can chain these sequence operators together. When you do that, you’re not making new copies of data. You’re manipulating the algorithm and the functions, and that new set of functions provides a new answer when you examine it.

You can see this at work by converting an ancient numeric algorithm from imperative to declarative. You can find the full source for this conversion in the online code, but I’ll highlight the key points in this article’s inline code. Hero of Alexandria’s algorithm for finding square roots lets you find the square root of any number S by starting with a guess G (S - 1 works fine). The next guess is computed using the formula ((S / G + G) / 2). For example, to find the square root of 2, you start with 1 as the guess. The next guess is 1.5 ((2 / 1 + 1) / 2). The guess after that is about 1.4167 ((2 / 1.5 + 1.5) / 2). After enough iterations, the answer converges on the square root.

You begin with a classic C# imperative implementation of Hero’s algorithm (see Listing 3). Next, you make a set of changes and re-implement this algorithm to make it more declarative, or functional (see Listing 4).

It’s a twist, so look at this revised listing carefully. Begin with HeroRootFunc, which defines a function that creates a sequence of guesses. It returns the last number in the sequence. The method contains two anonymous methods that define how to generate the next number, and when to stop. This expression defines how to generate the next number:

(g) => ((square / g + g) / 2)

This expression defines when to terminate the sequence:

(c, n) => Math.Abs(c - n) > epsilon

The query expression returns the entire sequence. The Last() extension method returns the last value in the sequence, which is the best answer.

The GenerateSequence() method generates the sequence while the test method returns true. It creates the sequence by evaluating each of the functions used as arguments. These methods generate the sequence and perform the tests. However, they only generate the sequence when someone asks for the final number.
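Listings 3 and 4 live in the online code rather than inline, but as a rough sketch (the GenerateSequence signature here is an assumption based on the description above), the declarative version might look like this:

static IEnumerable<double> GenerateSequence(double start,
    Func<double, double> getNext, Func<double, double, bool> moreNeeded)
{
    double current = start;
    double next = getNext(current);
    yield return current;
    // Produce guesses lazily while the test says we haven't converged.
    while (moreNeeded(current, next))
    {
        current = next;
        next = getNext(current);
        yield return current;
    }
}

static double HeroRootFunc(double square, double epsilon)
{
    return GenerateSequence(
        square - 1,                            // initial guess: S - 1
        (g) => ((square / g + g) / 2),         // next guess
        (c, n) => Math.Abs(c - n) > epsilon)   // keep going until converged
        .Last();
}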

Look again at the implementation of the functional version of Hero’s algorithm. The sequence function generates an infinite sequence. This algorithm would run out of memory if it were imperative. No matter how you do it, you can’t fit an infinite number of elements in memory. It would also take an infinite amount of time. And yet, this works, because the functions defined as parameters are evaluated only when requested. Also, the GenerateSequence() method can be used for other purposes.

Not every problem is best solved using functional approaches, but many problems can be solved more succinctly and more clearly by rethinking parameters and return types. Instead of sending all the data, you can send along a function that can generate the data you need. Sometimes that can give you the answer while requiring much less work on your part.

C# 3.0 New Language Features

Posted in C# 3.0,Visual Studio 2008 by priyangpatel on March 27, 2008

Automatic Properties

If you are a C# developer today, you are probably quite used to writing classes with basic properties like the code-snippet below:

public class Person {

    private string _firstName;
    private string _lastName;
    private int _age;

    public string FirstName {
        get { return _firstName; }
        set { _firstName = value; }
    }

    public string LastName {
        get { return _lastName; }
        set { _lastName = value; }
    }

    public int Age {
        get { return _age; }
        set { _age = value; }
    }
}

Note that we aren’t actually adding any logic in the getters/setters of our properties – instead we just get/set the value directly to a field. This begs the question – then why not just use fields instead of properties? Well – there are a lot of downsides to exposing public fields. Two of the big problems are: 1) you can’t easily databind against fields, and 2) if you expose public fields from your classes you can’t later change them to properties (for example: to add validation logic to the setters) without recompiling any assemblies compiled against the old class.

The new C# compiler that ships in “Orcas” provides an elegant way to make your code more concise while still retaining the flexibility of properties using a new language feature called “automatic properties”. Automatic properties allow you to avoid having to manually declare a private field and write the get/set logic — instead the compiler can automate creating the private field and the default get/set operations for you.

For example, using automatic properties I can now re-write the code above to just be:

public class Person {

public string FirstName {
get; set;
}

public string LastName {
get; set;
}

public int Age {
get; set;
}
}

Or if I want to be really terse, I can collapse the whitespace even further, like so:

public class Person {
public string FirstName { get; set; }
public string LastName { get; set; }
public int Age { get; set; }
}

When the C# “Orcas” compiler encounters an empty get/set property implementation like above, it will now automatically generate a private field for you within your class, and implement a public getter and setter property implementation to it. The benefit of this is that from a type-contract perspective, the class looks exactly like it did with our first (more verbose) implementation above. This means that — unlike public fields — I can in the future add validation logic within my property setter implementation without having to change any external component that references my class.
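For example, if a validation requirement shows up later, you could replace the automatic property with an explicit one without breaking any callers. Here’s a sketch (the validation rule is purely illustrative):

public class Person {
    private int _age;

    // Same public contract as the automatic property, now with validation.
    public int Age {
        get { return _age; }
        set {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value");
            _age = value;
        }
    }
    // FirstName and LastName as before...
}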

Object Initializers

Types within the .NET Framework rely heavily on the use of properties. When instantiating and using new classes, it is very common to write code like below:

Person person = new Person();
person.FirstName = "Scott";
person.LastName = "Guthrie";
person.Age = 32;

Have you ever wanted to make this more concise (and maybe fit it on one line)? With the C# and VB “Orcas” compilers, you can now take advantage of a great “syntactic sugar” language feature called “object initializers” that allows you to do this, and re-write the above code like so:

Person person = new Person { FirstName = "Scott", LastName = "Guthrie", Age = 32 };

The compiler will then automatically generate the appropriate property setter code that preserves the same semantic meaning as the previous (more verbose) code sample above.

In addition to setting simple property values when initializing a type, the object initializer feature allows us to optionally set more complex nested property types. For example, assume each Person type we defined above also has a property called “Address” of type “Address”. We could then write the below code to create a new “Person” object and set its properties like so:


Person person = new Person {
    FirstName = "Scott",
    LastName = "Guthrie",
    Age = 32,
    Address = new Address {
        Street = "One Microsoft Way",
        City = "Redmond",
        State = "WA",
        Zip = 98052
    }
};

Collection Initializers

Object Initializers are great, and make it much easier to concisely add objects to collections. For example, if I wanted to add three people to a generics-based List collection of type “Person”, I could write the below code:

List<Person> people = new List<Person>();

people.Add(new Person { FirstName = "Scott", LastName = "Guthrie", Age = 32 });
people.Add(new Person { FirstName = "Bill", LastName = "Gates", Age = 50 });
people.Add(new Person { FirstName = "Susanne", LastName = "Guthrie", Age = 32 });

Using the new Object Initializer feature alone saved 12 extra lines of code with this sample versus what I’d need to type with the C# 2.0 compiler.

The C# and VB “Orcas” compilers allow us to go even further, though, and also now support “collection initializers” that allow us to avoid having multiple Add statements, and save even further keystrokes:

List<Person> people = new List<Person> {
    new Person { FirstName = "Scott", LastName = "Guthrie", Age = 32 },
    new Person { FirstName = "Bill", LastName = "Gates", Age = 50 },
    new Person { FirstName = "Susanne", LastName = "Guthrie", Age = 32 }
};

When the compiler encounters the above syntax, it will automatically generate the collection insert code like the previous sample for us.

New Designer Support in Visual Studio 2008

Posted in Visual Studio 2008 by priyangpatel on March 26, 2008

The following video illustrates new features that are useful for developers as well as designers.

Click here!

Multi Targeting Support – Visual Studio 2008

Posted in Visual Studio 2008 by priyangpatel on March 24, 2008

What is Multi-Targeting?

With the past few releases of Visual Studio, each Visual Studio release only supported a specific version of the .NET Framework. For example, VS 2002 only worked with .NET 1.0, VS 2003 only worked with .NET 1.1, and VS 2005 only worked with .NET 2.0.

One of the big changes we are making starting with the VS 2008 release is to support what we call “Multi-Targeting” – which means that Visual Studio will now support targeting multiple versions of the .NET Framework, and developers will be able to start taking advantage of the new features Visual Studio provides without having to always upgrade their existing projects and deployed applications to use a new version of the .NET Framework library.

Now when you open an existing project or create a new one with VS 2008, you can pick which version of the .NET Framework to work with – and the IDE will update its compilers and feature-set to match this. Among other things, this means that features, controls, projects, item-templates, and assembly references that don’t work with that version of the framework will be hidden, and when you build your application you’ll be able to take the compiled output and copy it onto a machine that only has an older version of the .NET Framework installed, and you’ll know that the application will work.

Creating a New Project in VS 2008 that targets .NET 2.0

To see an example of multi-targeting in action on a recent build of VS 2008 Beta 2, we can select File->New Project to create a new application.

Notice below how in the top-right of the new project dialog there is now a dropdown that allows us to indicate which versions of the .NET Framework we want to target when we create the new project. If I keep it selected on .NET Framework 3.5, I’ll see a bunch of new project templates listed that weren’t in previous versions of VS (including support for WPF client applications and WCF web service projects):

If I change the dropdown to target .NET 2.0 instead, it will automatically filter the project list to only show those project templates supported on machines with the .NET 2.0 framework installed:

If I create a new ASP.NET Web Application with the .NET 2.0 dropdown setting selected, it will create a new ASP.NET project whose compilation settings, assembly references, and web.config settings are configured to work with existing ASP.NET 2.0 servers:

When you go to the control Toolbox, you’ll see that only those controls that work on ASP.NET 2.0 are listed:


And if you choose Add->Reference and bring up the assembly reference picker dialog, you’ll see that those .NET class assemblies that aren’t supported on .NET 2.0 are grayed out and can’t be added to the project (notice how the “ok” button is not active below when I have a .NET 3.0 or .NET 3.5 assembly selected):

So why use VS 2008 if you aren’t using the new .NET 3.5 features?

You might be wondering: “so what value do I get when using VS 2008 to work on an ASP.NET 2.0 project versus just using VS 2005 today?” Well, the good news is that you get a ton of tool-specific value with VS 2008 that you’ll be able to take advantage of immediately with your existing projects without having to upgrade your framework/ASP.NET version. A few big tool features in the web development space I think you’ll really like include:

  1. JavaScript intellisense
  2. Much richer JavaScript debugging
  3. Nested ASP.NET master page support at design-time
  4. Rich CSS editing and layout support within the WYSIWYG designer
  5. Split-view designer support for having both source and design views open on a page at the same time
  6. A much faster ASP.NET page designer – with dramatic perf improvements in view-switches between source/design mode
  7. Automated .SQL script generation and hosting deployment support for databases on remote servers

You’ll be able to use all of the above features with any version of the .NET Framework – without having to upgrade your project to necessarily target newer framework versions. I’ll be blogging about these features (as well as the great new framework features) over the next few weeks.

So how can I upgrade an existing project to .NET 3.5 later?

If at a later point you want to upgrade your project/site to target the .NET 3.0 or .NET 3.5 version of the framework libraries, you can right-click on the project in the solution explorer and pull up its properties page:

You can change the “Target Framework” dropdown to select the version of the framework you want the project to target. Doing this will cause VS to automatically update compiler settings and references for the project to use the correct framework version. For example, it will by default add some of the new LINQ assemblies to your project, as well as add the new System.Web.Extensions assembly that ships in .NET 3.5 which delivers new ASP.NET controls/runtime features and provides built-in ASP.NET AJAX support (this means that you no longer need to download the separate ASP.NET AJAX 1.0 install – it is now just built-in with the .NET 3.5 setup):

Once you change your project’s target version you’ll also see new .NET 3.5 project item templates show up in your add->new items dialog, you’ll be able to reference assemblies built against .NET 3.5, as well as see .NET 3.5 specific controls show up in your toolbox.

For example, below you can now see the new ListView control (an awesome new control that supports data reporting, editing, insert, delete, and paging scenarios, with 100% control over the markup generated and no inline styles or other HTML elements), as well as the new LinqDataSource control (which enables you to easily bind and work against LINQ to SQL data models) and the DataPager control, show up under the “Data” section of our toolbox:

Note that in addition to changing your framework version “up” in your project properties dialog, you can also optionally take a project that is currently building against .NET 3.0 or 3.5 and change it “down” (for example: move it from .NET 3.5 to 2.0). This will automatically remove the newer assembly references from your project, update your web.config file, and allow you to compile against the older framework (note: if you have code in the project that was written against the new APIs, obviously you’ll need to change it).

What about .NET 1.0 and 1.1?

Unfortunately the VS 2008 multi-targeting support only works with .NET 2.0, .NET 3.0 and .NET 3.5 – and not against older versions of the framework. The reason for this is that there were significant CLR engine changes between .NET 1.x and 2.x that make debugging very difficult to support. In the end the cost of the work to support that was so large and impacted so many parts of Visual Studio that we weren’t able to add 1.1 support in this release.

VS 2008 does run side-by-side, though, with VS 2005, VS 2003, and VS 2002. So it is definitely possible to continue targeting .NET 1.1 projects using VS 2003 on the same machine as VS 2008.

Utility to Convert Text / HTML to a Visual Basic String

Posted in Visual Studio 2008 by priyangpatel on March 18, 2008

AJAX opens many interesting new doors in terms of how we can tailor the user experience to the customer’s needs and how we can display content based on any number of contextual states.

This sometimes means fetching and manipulating HTML or XML in our server-side code and sending it to the browser at execution time via an AJAX request.

This handy utility converts your text or HTML into a string that you can use in server-side code.

[Just Click Here to get your copy – it’s free.]

Text/ Html To VB string

Free Microsoft Press E-Books Offer

Posted in Visual Studio 2008 by priyangpatel on March 18, 2008

Get Visual Studio 2008 e-books; you can explore the first chapters online.

It’s free! Click here.

Connect Apps with WCF

Posted in Visual Studio 2008 by priyangpatel on March 18, 2008

Learn when and how to utilize Windows Communication Foundation to develop and maintain your communications layer when creating a loosely coupled, scalable, interoperable services-oriented application.

Technology Toolbox: C#, Other: Windows Communication Foundation

Windows Communication Foundation (WCF) is a powerful new technology for building services-oriented architecture (SOA)-based applications. The usefulness of WCF goes well beyond large-scale enterprise SOAs. WCF can be used even for simple scenarios where all you need is connectivity between two apps on the same machine or across processes on different machines, even if you haven’t adopted the full architectural approach of SOA.

In this article, I’ll discuss some best practices and things to keep in mind when applying WCF in the real world. I’ll start with a quick review of the basics of connecting applications with WCF, and then focus on several areas where you have to make hard choices between creating an easy-to-develop-and-maintain communications layer or creating a loosely coupled, scalable, interoperable SOA-based application. I’ll also emphasize a collection of best practices, describing the context of where those practices would apply and why they’re important.

You can use WCF to get two different chunks of code talking to each other across a wide variety of connectivity scenarios. With WCF, you can create a full-blown SOA-based application that communicates across the open Internet with another service written in a completely different technology. You can also use it to get two classes in the same assembly in the same process talking to one another. In general, you should consider using WCF for any new code where you need to cross a process boundary, and even in some scenarios for connecting decoupled objects (as services) within the same process. Basically, you should forget that .NET Remoting, ASP.NET Web Services, and Enterprise Services exist (except for maintaining legacy code written in those technologies), and focus on WCF for all your connectivity needs.

Connecting two pieces of code with WCF requires that you implement five elements on the server side.

First, you need a service contract. This defines what operations you expose at the service boundary, as well as the data that is passed through those operations.

Second, you need data contracts for complex types passed through the service contract operations. These contracts define the shape of the sent data so that it can be consumed or produced by the client.

Third, you need a service implementation. This provides the functional code that answers the incoming service messages and decides what to do with them.

The fourth required element is a service configuration. This specifies how the service is exposed, in terms of its address, binding, and contract. Wrapped up in the binding are all the gory details of what network protocol you’re using, the encoding of the message, the security mechanisms being used, and whether you’re using reliability, transactions, or several other features that WCF supports.

Finally, you must have a service host. This is the process where the service runs. Your service host can be any .NET process if you want to self-host it, IIS, or Windows Activation Service (WAS) for Windows Vista and Windows Server 2008.

On the client side, you need three elements to make service calls with WCF (whether the service is a WCF service or one implemented in some other technology): a service contract definition that matches what the server uses, a data contract definition that matches what the server is using, and a proxy that can form the messages to send to the service and process returned messages.
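Put together, a minimal client-side call might look like this sketch (ProductServiceClient is a typical generated-proxy name, used again in the client example later in this article; a generated proxy may return an array rather than a List depending on configuration):

ProductServiceClient proxy = new ProductServiceClient();
List<ConsumerProduct> products = proxy.GetProducts(); // call through the proxy
proxy.Close(); // release the channel when done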

There are many different ways that you can use WCF, depending on the scenario and requirements; however, there are a number of best practices you should keep in mind as you design your WCF back-end services.

Use Service Boundary Best Practices
Layering is a good idea in any application. You should already be familiar with the benefits of separating functionality into a presentation layer, business layer, and data access layer. Layering provides separation of concerns and better factoring of code, which gives you better maintainability and the ability to split layers out into separate physical tiers for scalability. In the same way that data access code should be separated into its own layer that focuses on translating between the world of the database and the application domain, services should be placed in a separate service layer that focuses on translating between the services-oriented external world and the application domain (see Figure 1).

Having a service layer implies that you’ve put your service definitions into a separate class library, and host that library in your service host environment. The service layer dispatches calls into the business layer to get the work of the service operation done.

WCF supports putting your service contract and operation contract attributes directly in the implementation class, but you should always avoid doing so. Having an interface definition that clearly defines what the service boundary looks like, separate from the implementation of that service, is preferable.

For example, you might implement a simple service contract definition like this:


[ServiceContract()]
public interface IProductService
{
   [OperationContract()]
   List<ConsumerProduct> GetProducts();
}


An important aspect of SOA design is hiding all the details of the implementation behind the service boundary. This includes not revealing or dictating what particular technology was used. It also means you shouldn’t assume the consuming application supports a complex object model. Part of the service boundary definition is the data contract definition for the complex types that will be passed as operation parameters or return values.

For maximum interoperability and alignment with SOA principles, you should not pass .NET-specific types, such as DataSets or Exceptions, across the service boundary. You should stick to fairly simple data structure objects such as classes with properties and backing member fields. You can pass objects that have nested complex types such as a Customer with an Order collection, but you shouldn’t make any assumptions about the consumer being able to support object-oriented constructs like interfaces or base classes for interoperable Web services.

However, if you’re using WCF only as a new remoting technology to get two different pieces of code to talk to each other across processes, with no expectation or requirement for others to write consuming applications, then you can pass whatever you want as a data contract. You just have to make sure that those types are marked appropriately as data contracts or are serializable types. Generally speaking, you face various challenges when passing DataSets through WCF services, so you should avoid doing so, except for simple scenarios. If you do want to pursue using DataSets with WCF, you should definitely use typed DataSets and try to stick to individually typed DataTables as parameters or return types.

Note that the simple service contract described previously is defined in terms of List<ConsumerProduct>. WCF is designed to flatten enumerable collections into arrays at the service boundary. Rather than limiting interoperability, this feature makes your life easier when populating and using your collections in the service and business layers.

For example, consider this data contract definition for the ConsumerProduct type:


[DataContract()]
public class ConsumerProduct
{
    private int m_ProductID;
    private string m_ProductName;
    private double m_UnitPrice;

    [DataMember()]
    public int ProductID
    {
       get { return m_ProductID; }
       set { m_ProductID = value; }
    }

    [DataMember()]
    public string ProductName
    {
       get { return m_ProductName; }
       set { m_ProductName = value; }
    }

    [DataMember()]
    public double UnitPrice
    {
       get { return m_UnitPrice; }
       set { m_UnitPrice = value; }
    }
}


Use Per-Call Instancing
Another important best practice to adhere to: Services should use per-call instancing as a default. WCF supports three instancing modes for services: Per-Call, Per-Session, and Single. Per-Call creates a new instance of the service implementation class for each operation call that comes into the service and disposes of it when the service operation is complete. This is the most scalable and robust option, so it’s unfortunate that the WCF product team decided to change away from this being the default instancing mode shortly before the release of the product. Per-Session allows a single client to keep a service instance alive on the server as long as the client keeps making calls into that instance. This allows you to store state in member variables of the service instance between calls and have an ongoing, stateful conversation between a client application and a server object. However, it has several messy side effects, including the fact that consuming memory on a server when it isn’t actively in use is bad for scalability. It also gets messy when transactions are involved. Single(ton) allows you to have all calls from all clients routed to a single service instance on the back-end. This allows you to use that single point of entry as a gatekeeper if your requirements dictate the need for such a thing. Using the Singleton mode is even worse for scalability because all calls into the singleton instance are serialized (one caller at a time) by default. Single mode also has some of the same side effects as sessionful services. That said, there are specific scenarios where using Per-Session or Single makes sense.

You should try to design your services as Per-Call initially because it’s the cleanest, most scalable, and safest option, and only talk yourself into using Per-Session or Singleton if you understand the implications of using those modes.

This code declares a Per-Call service for the service contract illustrated previously:


[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ProductService : IProductService
{
   List<ConsumerProduct> IProductService.GetProducts()
   {
      ...
   }
}


I recommend checking out “Programming WCF Services” by Juval Löwy (O’Reilly, 2007) for a better understanding of the differences between, and implications of, the Per-Session and Single instancing modes (see Additional Resources).

Deal with Exceptions
If an unhandled exception reaches the service boundary, WCF will catch that exception and return a SOAP fault to the caller. By default, that fault is opaque and doesn’t reveal any details about what the real problem was on the back-end. This is good and aligns with SOA design principles. You should only reveal information to the client that you choose to expose, and avoid exposing details like stack traces and the like that would go with a normal exception delivery.

However, for most bindings in WCF, WCF will also fault the channel when an unhandled exception reaches the service boundary, which usually means you cannot make subsequent calls through the same proxy on the client side, and you will have to establish a new connection to the server. As a result, one of your responsibilities in designing a good service layer is to catch all exceptions and throw a FaultException exception to WCF in cases when the service could recover from the exception and answer subsequent calls without causing follow-on problems.

FaultException is a special type that still propagates as an Exception on the .NET call stack, but is interpreted differently by the WCF layer that performs the messaging. Think of it as a handled exception that you’re throwing to WCF, as opposed to an unhandled exception that propagates into WCF on its own without your service intervening. You can pass non-detailed information to the client about what the problem was by using the Reason property on FaultException. If a FaultException is caught by WCF, it will still package it as a SOAP fault message, but it will not fault the channel. That means you can instruct the client to handle the resulting error and keep calling the service without needing to open a new connection.

There are many different scenarios for dealing with exceptions in WCF, as well as several ways to hook up your exception handling in the service layer.

Some basic rules you should follow include: Catch unhandled exceptions and throw a FaultException if your service is able to recover and answer subsequent calls without causing further problems (which it should be able to do if you designed it as a per-call service); don’t send exception details to the client except for debugging purposes during development; and pass non-detailed exception information to the client using the Reason property on FaultException.

This service method captures a particular exception type and returns non-detailed information about the problem to the client through FaultException:


List<ConsumerProduct> IProductService.GetProducts()
{
   try
   {
      ...
   }
   catch (SqlException ex)
   {
      throw new FaultException<string>(
         "The service could not connect to the data store", // details argument
         "Unknown error"); // Reason
   }
}


The T type parameter of FaultException<T> can be any type you want, but for interoperability, you will probably want to stick to a simple data structure that is marked as a data contract or types that are serializable.
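On the client side, a typed fault like this can be caught specifically. Here’s a minimal sketch, assuming a proxy instance named proxy:

try
{
   List<ConsumerProduct> products = proxy.GetProducts();
}
catch (FaultException<string> fault)
{
   // fault.Detail carries the detail payload; fault.Reason the reason text.
   Console.WriteLine(fault.Reason.ToString());
}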

For small-scale services where you will be the only one writing the client applications, you can choose to wrap the caught exception type in a FaultException<T> (for example, FaultException<SqlException>) so that the client gets the full details of the exception, but you don’t fault the channel.

Michele Leroux Bustamante’s book, “Learning WCF” (O’Reilly, 2007), provides good coverage of exception handling in WCF (see Additional Resources).

Choose an Appropriate Service Host
Windows Communication Foundation has three options for hosting your services. You can be self-hosted (run your services in any .NET application that you design), IIS-hosted, or Windows Activation Service (WAS)-hosted.

Self-hosting gives you the most flexibility because you set up the hosting environment yourself. This means you can access and configure your service host programmatically and do other things like hook up to service host events for operations monitoring and control. However, self-hosting puts the responsibility for process management and other configuration options squarely on your shoulders.

This code describes a simple Windows Service self-hosting setup:


public partial class MyNTServiceHost : ServiceBase
{
   ServiceHost m_ProductServiceHost = null;

   public MyNTServiceHost()
   {
      InitializeComponent();
   }

   protected override void OnStart(string[] args)
   {
      m_ProductServiceHost = new ServiceHost(typeof(ProductService));
      m_ProductServiceHost.Open();
   }

   protected override void OnStop()
   {
      if (m_ProductServiceHost != null)
         m_ProductServiceHost.Close();
   }
}


IIS-hosting allows you to deploy your services to IIS by dropping the DLLs into the \Bin directory and putting .SVC files as the service addressable endpoints. You gain the kernel-level request routing of IIS, the IIS management consoles for configuring the hosting environment, IIS’s ability to start and recycle worker processes, and more. The big downside to IIS-hosting is that you’re stuck with HTTP-based bindings only.

WAS is a part of IIS 7 (Windows Vista and Windows Server 2008) and gives you the hosting model of IIS. However, it also allows you to expose services using protocols other than HTTP, such as TCP, Named Pipes, and MSMQ.

WAS is almost always the best choice if you’re targeting newer platforms. If you’re exposing services outside your network, you will probably be using HTTP protocols anyway, so IIS-hosting is usually best for externally exposed services.

If WAS-hosting isn’t an option for services running inside the intranet, you should plan on self-hosting to take advantage of other (faster and more capable) protocols inside the firewall, as well as to give you more flexibility when configuring your environment programmatically.

Use Callbacks Properly
WCF includes a capability to call back a client to return data to it asynchronously or as a form of event notification. This is handy when you want to signal the client that a long-running operation has completed on the back-end, or to notify a client of changing data that affects it. To do this, you must define a callback contract that is paired with your service contract. The client doesn’t have to expose that callback contract publicly as a service to the outside world, but the service can use it to call back to the client after an initial call has been made into the service by the client. This code creates a service contract definition with a paired callback contract:


[ServiceContract(CallbackContract = typeof(IProductServiceCallback))]
public interface IProductService
{
   [OperationContract()]
   List<ConsumerProduct> GetProducts();

   [OperationContract()]
   void SubscribeProductChanges();

   [OperationContract()]
   void UnsubscribeProductChanges();
}

public interface IProductServiceCallback
{
   [OperationContract()]
   void ProductChanged(ConsumerProduct product);
}


If you intend to use callbacks, it’s a good idea to expose a subscribe/unsubscribe API as part of the service contract. To perform callbacks, the service must capture a callback context from an incoming call, and then hold that context in memory until the point when a call is made back to the client. The client also needs to create an object that implements the callback interface to receive the incoming calls from the service, and must keep that object and the same proxy alive as long as callbacks are expected. This sets up a fairly tightly coupled communications pattern between the client and the service (including object lifetime dependencies), so it’s a good idea to let the client control exactly when that tight coupling begins and ends through explicit service calls to start and end the conversation.

The biggest limitations of callbacks are that they don’t scale and might not work in interop scenarios. The scaling problem is related to the fact that the service must hold a reference to the client in memory and perform the callbacks on that reference. Listing 1 contains code that illustrates how to capture and store a callback reference for a service, and use it to send change notifications to the client.
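Listing 1 isn’t shown inline; a rough sketch of the capture-and-notify pattern it describes might look like this (the static subscriber list and method bodies are assumptions, not the article’s exact code):

// Inside the ProductService implementation class:
static List<IProductServiceCallback> s_Subscribers =
   new List<IProductServiceCallback>();

void IProductService.SubscribeProductChanges()
{
   // Capture the caller's callback channel from the current operation
   // context and hold it in memory (this is where the scaling cost lies).
   IProductServiceCallback callback = OperationContext.Current
      .GetCallbackChannel<IProductServiceCallback>();
   s_Subscribers.Add(callback);
}

// Called whenever product data changes on the back-end:
static void NotifySubscribers(ConsumerProduct product)
{
   foreach (IProductServiceCallback callback in s_Subscribers)
      callback.ProductChanged(product);
}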

You also face an interop problem related to the fact that the callback mechanism in WCF is based on a proprietary technology with no ratified standard. It’s proprietary, but at least it’s expressed through things in the SOAP message that could be consumed and used properly by other technologies. However, if interop is a part of your requirements, you should avoid callbacks.

The alternatives are to set up a polling API where a client can come and ask for changes at appropriate intervals, or you can set up a publish-subscribe middleman service to act as a broker for subscriptions and publications to avoid coupling the client and the service and to keep them scalable. You can find an example of how to do this in the appendix of “Programming WCF Services” (see Additional Resources).

Use Maintainable Proxy Code
One of the tenets of service orientation is that you should share schema and contract, not types. So, to keep a client as decoupled as possible from a service definition, you shouldn’t share any types between the service and the client that aren’t part of the service boundary. However, if you’re writing both the service and the client, you don’t want to have to maintain two type definitions for the same thing.

That said, there’s no crime in referencing an assembly on the client side that’s also used by the service to access the .NET type definitions of the service contract and data contracts. You just have to make sure that your service is usable by clients if they don’t have access to the shared assembly, rather than having to regenerate those types on the client side from the metadata. To do so, just define these in a separate assembly from the service implementation so that you can reference them from both sides without introducing additional coupling. If you do this, keep in mind that you’re introducing a little more coupling between service and client in the interest of productivity and speed of development and maintenance.

As mentioned in the earlier section on faults, an unhandled exception delivered as a fault will also fault the communications channel with most of the built-in bindings. When the fault is received in a WCF client, it’s raised as a FaultException if it wasn’t specifically thrown as a FaultException<T> on the service side. Because this behavior isn’t consistent across all bindings, and because your service code and client code should be decoupled from whatever particular binding you use, the only safe thing to do on the client side if a service call throws an exception is to assume the worst and avoid re-using the proxy. In fact, even disposing of or closing the proxy can result in a subsequent exception. This means you should wrap calls to a service in try/catch blocks and replace the proxy instance with a new one in the catch block:


public class MyClient
{
   ProductServiceClient m_Proxy = new ProductServiceClient();

   private void OnGetProducts(object sender, RoutedEventArgs e)
   {
      try
      {
         DataContext = m_Proxy.GetProducts();
      }
      catch (Exception)
      {
         m_Proxy = new ProductServiceClient();
      }
   }
}


That’s about it for taking advantage of WCF in the real world. I’ve covered a lot of ground, including many of the constructs you will need to define WCF services and clients. I’ve also discussed how best practices and real-world considerations affect the choices you make for those constructs. Of course, this is just a start; the real way to absorb these lessons is to sit down at a keyboard and give them a go. To that end, the sample code for this article includes a full implementation of a service and client that calls that service with all the basic code constructs. Feel free to give the code a whirl and experiment with it to get a better feel for how the technologies in this article fit together.


Inside Functional Programming

Posted in Visual Studio 2008 by priyangpatel on March 13, 2008

Take advantage of functional programming techniques like Filter, Map, and Reduce in your day-to-day business apps.
TECHNOLOGY TOOLBOX: C#, SQL Server 2005 Compact Edition Runtime, Visual Studio 2005 Standard Edition SP1 or Higher, SQL Server Management Studio [Express] SP2

As you start using C# 3.0, you’ll find yourself diving deeper into the concepts created for functional programming. This is the academic research done for languages like LISP, ML, Haskell, and others that have only a small installed base in professional programming environments. In a way, that’s too bad, because many of these concepts provide elegant solutions to many of the problems you need to solve every day.

The incorporation of these functional programming techniques into .NET is one of the reasons why I’m excited about the release of C# 3.0 and Visual Studio (VS) 2008. You can use these same concepts in your favorite language. That’s important for a couple of reasons. First, you’re more familiar with the syntax of your preferred language, and that makes it much easier to continue being productive. Second, you can mix these functional programming concepts alongside more traditional imperative algorithms.

Of course, you also give something up by staying in your familiar environment. Doing things the way you’re accustomed to doing them often means that you’re slow to try and adopt new techniques. Other times, you might not get the full advantage of a given technique because you’re using it in your familiar context, and that context doesn’t take full advantage of the technique.

This article will familiarize you with three of the most common functional programming elements: the higher order functions Filter, Map, and Reduce. You’re probably already familiar with the general concepts–if not the specific terms–so much of the research associated with functional programming will be fairly accessible.

Define a Value’s Removal
You’ve already used the concept of Filter, even in C# 2.0. List<T> contains a RemoveAll() method. RemoveAll takes a delegate, and that delegate determines which values should be removed from your collection. For example, this code removes all integers from the someNumbers collection that are divisible by 3:

List<int> someNumbers = new List<int>
   { 1, 2, 3, 4, 5, 6, 7, 8, 9,
      10, 11, 12, 13, 14, 15 };

someNumbers.RemoveAll(
   delegate(int num)
      { return num % 3 == 0; });

C# 3.0 provides a more concise way to express that same concept:

someNumbers.RemoveAll(num => num % 3 == 0);

That’s a filter. The filter defines when to remove a value. Let’s take a small detour into vocabulary land. A Higher-Order function is simply a function that takes a function as a parameter, or returns a function, or both. Both of these samples fit that description. In both cases, the parameter to RemoveAll() is the function that describes what members should be removed from the set. Internally, the RemoveAll() method calls your function once on every item in the sequence. When there’s a match, that item gets removed.

In C# 3.0 and Language Integrated Query (LINQ) syntax, the Where clause defines the filter. In the case of Where, the filter expression might not be evaluated as a delegate. LINQ to SQL processes the expression tree representation of your query. By examining the expression tree, LINQ to SQL can create a T-SQL representation of your query and execute the query using the database engine, rather than invoking the delegate. Any provider that implements IQueryable<T> or IQueryable will parse the expression tree and translate it into the best format for the provider.
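The distinction shows up directly in the types involved; this small sketch isn’t from the article’s code:

// A delegate is compiled code you can only invoke; an expression tree is
// data describing the code, which a provider such as LINQ to SQL can
// inspect and translate (requires System.Linq.Expressions).
Func<int, bool> asDelegate = num => num % 3 == 0;
Expression<Func<int, bool>> asExpression = num => num % 3 == 0;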

A filter is the simplest form of a higher order function. Its input is a sequence, and its output is a proper subset of the input sequence. The concept is already familiar to you, and it shows the fundamental concept of passing a function as a parameter to another function.

C# 2.0 and the corresponding methods in the .NET framework did not fully embrace the concepts of functional programming. You can see that in the way RemoveAll is implemented. It’s a member of the List<T> class, and it modifies that object. A true Filter takes its input sequence as a parameter and returns the output sequence; it doesn’t modify the state of its input sequence.
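In C# 3.0 query syntax, the same filter can be written without mutating its source, which is closer to the functional ideal (a sketch, reusing someNumbers from above):

// Produces a new, lazily evaluated sequence; someNumbers is untouched.
IEnumerable<int> multiplesOfThree =
   from num in someNumbers
   where num % 3 == 0
   select num;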

Return a New Sequence
Map is the second major building block you’ll see in functional programming. Map returns a new sequence computed from an input sequence. Similar to Filter, Map takes a function as one of its parameters. That function transforms a single input element into the corresponding output element.

As with Filter, there’s similar functionality in the .NET base library. List<T>.ConvertAll produces a new list of elements using a delegate you define that transforms a single input element into the corresponding output element. Here, the conversion computes the square of every number:

List<int> someNumbers = new List<int>
   { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
List<int> squares = someNumbers.ConvertAll(
   delegate(int num)
{
   return num * num;
});
squares.ForEach(delegate(int num)
   { Console.WriteLine(num); });

In C# 3.0, lambda syntax makes this more concise:

List<int> squares =
   someNumbers.ConvertAll(num => num * num);

Map is built in to the query syntax added in C# 3.0:

IEnumerable<int> squares = from num in someNumbers
   select num * num;

Of course, you probably noticed that quick change in the last code snippet. The last version returned an IEnumerable<int> rather than a List<int>. The C# 3.0 versions of these methods operate on sequences, and aren’t members of any particular type.

There’s nothing that says the output sequence has to be of the same type as the input sequence. This method returns a list of formatted strings computed from a set of input numbers:

List<int> someNumbers =
   new List<int>
   { 1, 2, 3, 4, 5, 6, 7, 8,
      9, 10, 11, 12, 13, 14, 15 };
List<string> formattedNumbers =
   someNumbers.ConvertAll(
   delegate(int num)
{
   return string.Format("{0:D6}", num);
});
formattedNumbers.ForEach(
   delegate(string num)
   { Console.WriteLine(num); });

Of course, the same method gets simplified using C# 3.0:

List<string> formattedNumbers =
   someNumbers.ConvertAll
   (num => string.Format("{0:D6}", num));

And it can be further simplified using the query syntax:

IEnumerable<string> formattedNumbers =
   from num in someNumbers
   select string.Format("{0:D6}", num);

As with Filter, you’ve used functionality like Map before. You might not have known what it was called, or where its computer science roots lie. Map is nothing more than a convention where you write a method that converts one sequence into another, and the specifics of that conversion are coded into a second method. That second method is then passed as a parameter to the Map function.

One Function to Rule Them All: Reduce
The most powerful of the three concepts I’m covering this month is Reduce. (You’ll also find some references that use the term “Fold.”) Reduce returns a single scalar value that’s computed by visiting all the members of a sequence. Reduce is one of those concepts that is much simpler once you see some examples.

This simple code snippet computes the sum of all values in the sequence:

List<int> someNumbers = new List<int>
   { 1, 2, 3, 4, 5, 6, 7, 8, 9,
      10, 11, 12, 13, 14, 15 };
int sum = 0;
foreach (int num in someNumbers)
   sum += num;
Console.WriteLine(sum);

This is simple stuff that you’ve written many times. The problem is that you can’t reuse any of it anywhere. Also, many other examples will likely contain more complicated code inside the loop. So smart computer science wizards decided to take on this problem and create a way to pass along that inner code as a parameter to a generic method. In C# 3.0, the answer is the Aggregate extension method. Aggregate has a few overloads. This example uses the simplest form:

List<int> someNumbers = new List<int>
   { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
      11, 12, 13, 14, 15 };
int sum = someNumbers.Aggregate(
   delegate(int currentSum, int num)
{
   return currentSum + num;
});
Console.WriteLine(sum);

The delegate produces a running sum from the current value in the sequence and the total accumulated so far. There are two other overloads of Aggregate. One takes a seed value:

int sum = someNumbers.Aggregate(0,
delegate(int currentSum, int num)
{
    return currentSum + num;
});

The final overload allows you to specify a different return type. Suppose you wanted to build a comma-separated string of all the values. You’d use the third version of Aggregate:

List<int> someNumbers = new List<int>
   { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
      11, 12, 13, 14, 15 };
string formattedValues =
   someNumbers.Aggregate(null,
   delegate(string currentOutput, int num)
{
   if (currentOutput != null)
      return string.Format("{0}, {1}",
         currentOutput, num);
   else
      return num.ToString();
});
Console.WriteLine(formattedValues);

Of course, all of these can be rewritten using lambda syntax:

int sum = someNumbers.Aggregate(
   0, (currentSum, num) => currentSum + num);
// Or:
string formattedValues = someNumbers.Aggregate(
   (string)null, (currentOutput, num) =>
      (currentOutput != null) ?
         string.Format("{0}, {1}", currentOutput, num) :
         num.ToString());

The second example builds the comma-separated string, and the code is a bit more complicated. But it’s all stuff you’ve seen before. It uses the ternary operator to do the test. If that makes you uncomfortable, you can use the imperative syntax with lambda expressions:

string formattedValues =
   someNumbers.Aggregate(
      (string)null, (currentOutput, num) =>
   {
      if (currentOutput != null)
         return string.Format("{0}, {1}",
            currentOutput, num);
      else
         return num.ToString();
   });

Earlier, I paraphrased Tolkien, and called Reduce the one function to rule them all. From a computer science perspective, Filter and Map are nothing more than special cases of Reduce. If you define a Reduce method where the return value type is a sequence, you can implement Map and Filter using Reduce.

However, most libraries don’t work that way because Map and Filter can perform much better if they don’t share code with Reduce. And the Filter and Map prototypes are quite a bit simpler to understand.
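Still, to make the claim concrete, here’s a sketch (not from the article’s code) of Map implemented in terms of Reduce, where the accumulated “scalar” is itself a sequence:

// Map as a special case of Reduce: fold each transformed element
// onto the end of an accumulated output sequence.
static IEnumerable<TResult> Map<TSource, TResult>(
   IEnumerable<TSource> source, Func<TSource, TResult> transform)
{
   return source.Aggregate(
      Enumerable.Empty<TResult>(),
      (accumulated, item) => accumulated.Concat(new[] { transform(item) }));
}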

This column contained some low-level concepts that will help you understand the computer science upon which C# 3.0, LINQ, and much of the .NET 3.5 Framework were built. It’s all stuff you’ve seen before, and it’s not that difficult. It’s just that they come with new twists and more concise code around what you’ve already been doing.
