Friday, June 8, 2007

Asynchronous Pages in ASP.NET 2.0


ASP.NET 2.0 is replete with new features ranging from declarative data binding and Master Pages to membership and role management services. But my vote for the coolest new feature goes to asynchronous pages, and here's why.

When ASP.NET receives a request for a page, it grabs a thread from a thread pool and assigns that request to the thread. A normal, or synchronous, page holds onto the thread for the duration of the request, preventing the thread from being used to process other requests. If a synchronous request becomes I/O bound—for example, if it calls out to a remote Web service or queries a remote database and waits for the call to come back—then the thread assigned to the request is stuck doing nothing until the call returns. That impedes scalability because the thread pool has a finite number of threads available. If all request-processing threads are blocked waiting for I/O operations to complete, additional requests get queued up waiting for threads to be free. At best, throughput decreases because requests wait longer to be processed. At worst, the queue fills up and ASP.NET fails subsequent requests with 503 "Server Unavailable" errors.

Asynchronous pages offer a neat solution to the problems caused by I/O-bound requests. Page processing begins on a thread-pool thread, but that thread is returned to the thread pool once an asynchronous I/O operation begins in response to a signal from ASP.NET. When the operation completes, ASP.NET grabs another thread from the thread pool and finishes processing the request. Scalability increases because thread-pool threads are used more efficiently. Threads that would otherwise be stuck waiting for I/O to complete can now be used to service other requests. The direct beneficiaries are requests that don't perform lengthy I/O operations and can therefore get in and out of the pipeline quickly. Long waits to get into the pipeline have a disproportionately negative impact on the performance of such requests.

The ASP.NET 2.0 Beta 2 async page infrastructure suffers from scant documentation. Let's fix that by surveying the landscape of async pages. Keep in mind that this column was developed with beta releases of ASP.NET 2.0 and the .NET Framework 2.0.

Asynchronous Pages in ASP.NET 1.x

ASP.NET 1.x doesn't support asynchronous pages per se, but it's possible to build them with a pinch of tenacity and a dash of ingenuity. For an excellent overview, see Fritz Onion's article entitled "Use Threads and Build Asynchronous Handlers in Your Server-Side Web Code" in the June 2003 issue of MSDN® Magazine.

The trick here is to implement IHttpAsyncHandler in a page's codebehind class, prompting ASP.NET to process requests not by calling the page's IHttpHandler.ProcessRequest method, but by calling IHttpAsyncHandler.BeginProcessRequest instead. Your BeginProcessRequest implementation can then launch another thread. That thread calls base.ProcessRequest, causing the page to undergo its normal request-processing lifecycle (complete with events such as Load and Render) but on a non-threadpool thread. Meanwhile, BeginProcessRequest returns immediately after launching the new thread, allowing the thread that's executing BeginProcessRequest to return to the thread pool.

That's the basic idea, but the devil's in the details. Among other things, you need to implement IAsyncResult and return it from BeginProcessRequest. That typically means creating a ManualResetEvent object and signaling it when ProcessRequest returns in the background thread. In addition, you have to provide the thread that calls base.ProcessRequest. Unfortunately, most of the conventional techniques for moving work to background threads, including Thread.Start, ThreadPool.QueueUserWorkItem, and asynchronous delegates, are counterproductive in ASP.NET applications because they either steal threads from the thread pool or risk unconstrained thread growth. A proper asynchronous page implementation uses a custom thread pool, and custom thread pool classes are not trivial to write (for more information, see the .NET Matters column in the February 2005 issue of MSDN Magazine).

The bottom line is that building async pages in ASP.NET 1.x isn't impossible, but it is tedious. And after doing it once or twice, you can't help but think that there has to be a better way. Today there is—ASP.NET 2.0.


Asynchronous Pages in ASP.NET 2.0

ASP.NET 2.0 vastly simplifies the way you build asynchronous pages. You begin by including an Async="true" attribute in the page's @ Page directive, like so:

<%@ Page Async="true" ... %>

Under the hood, this tells ASP.NET to implement IHttpAsyncHandler in the page. Next, you call the new Page.AddOnPreRenderCompleteAsync method early in the page's lifetime (for example, in Page_Load) to register a Begin method and an End method, as shown in the following code:

AddOnPreRenderCompleteAsync (
    new BeginEventHandler(MyBeginMethod),
    new EndEventHandler (MyEndMethod)
);
What happens next is the interesting part. The page undergoes its normal processing lifecycle until shortly after the PreRender event fires. Then ASP.NET calls the Begin method that you registered using AddOnPreRenderCompleteAsync. The job of the Begin method is to launch an asynchronous operation such as a database query or Web service call and return immediately. At that point, the thread assigned to the request goes back to the thread pool. Furthermore, the Begin method returns an IAsyncResult that lets ASP.NET determine when the asynchronous operation has completed, at which point ASP.NET extracts a thread from the thread pool and calls your End method. After End returns, ASP.NET executes the remaining portion of the page's lifecycle, which includes the rendering phase. Between the time Begin returns and End gets called, the request-processing thread is free to service other requests, and until End is called, rendering is delayed. And because version 2.0 of the .NET Framework offers a variety of ways to perform asynchronous operations, you frequently don't even have to implement IAsyncResult. Instead, the Framework implements it for you.

The codebehind class in Figure 1 provides an example. The corresponding page contains a Label control whose ID is "Output". The page uses the System.Net.HttpWebRequest class to fetch the contents of a Web page. It then parses the returned HTML and writes to the Label control a list of all the HREF targets it finds.

Since an HTTP request can take a long time to return, AsyncPage.aspx.cs performs its processing asynchronously. It registers Begin and End methods in Page_Load, and in the Begin method, it calls HttpWebRequest.BeginGetResponse to launch an asynchronous HTTP request. BeginAsyncOperation returns to ASP.NET the IAsyncResult returned by BeginGetResponse, resulting in ASP.NET calling EndAsyncOperation when the HTTP request completes. EndAsyncOperation, in turn, parses the content and writes the results to the Label control, after which rendering occurs and an HTTP response goes back to the browser.
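Figure 1 itself isn't reproduced here, but the pattern it illustrates can be sketched roughly as follows. The control ID ("Output") and the method names come from the text; the target URL and the HREF-parsing details are placeholder assumptions:

```csharp
// Sketch of AsyncPage.aspx.cs; requires using System, System.IO, System.Net,
// System.Text, System.Text.RegularExpressions, System.Web.UI
public partial class AsyncPage : System.Web.UI.Page
{
    private HttpWebRequest _request;

    protected void Page_Load(object sender, EventArgs e)
    {
        AddOnPreRenderCompleteAsync(
            new BeginEventHandler(BeginAsyncOperation),
            new EndEventHandler(EndAsyncOperation)
        );
    }

    IAsyncResult BeginAsyncOperation(object sender, EventArgs e,
        AsyncCallback cb, object state)
    {
        // Launch the asynchronous HTTP request and return immediately;
        // the thread assigned to this request goes back to the pool
        _request = (HttpWebRequest)
            WebRequest.Create("http://example.com"); // placeholder URL
        return _request.BeginGetResponse(cb, state);
    }

    void EndAsyncOperation(IAsyncResult ar)
    {
        // Called on a thread-pool thread when the HTTP request completes
        string html;
        using (WebResponse response = _request.EndGetResponse(ar))
        using (StreamReader reader =
            new StreamReader(response.GetResponseStream()))
        {
            html = reader.ReadToEnd();
        }

        // Extract HREF targets and write them to the Label control
        Regex regex = new Regex("href\\s*=\\s*\"([^\"]+)\"",
            RegexOptions.IgnoreCase);
        StringBuilder builder = new StringBuilder();
        foreach (Match match in regex.Matches(html))
            builder.Append(match.Groups[1].Value + "<br/>");
        Output.Text = builder.ToString();
    }
}
```

Note that the page stores the HttpWebRequest in a field so the End method, which runs on a different thread, can complete the same request.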

Figure 2 Synchronous vs. Asynchronous Page Processing

Figure 2 illustrates the difference between a synchronous page and an asynchronous page in ASP.NET 2.0. When a synchronous page is requested, ASP.NET assigns the request a thread from the thread pool and executes the page on that thread. If the request pauses to perform an I/O operation, the thread is tied up until the operation completes and the page lifecycle can be completed. An asynchronous page, by contrast, executes as normal through the PreRender event. Then the Begin method that's registered using AddOnPreRenderCompleteAsync is called, after which the request-processing thread goes back to the thread pool. Begin launches an asynchronous I/O operation, and when the operation completes, ASP.NET grabs another thread from the thread pool, calls the End method, and executes the remainder of the page's lifecycle on that thread.

Figure 3 Trace Output Shows Async Page's Async Point

The call to Begin marks the page's "async point." The trace in Figure 3 shows exactly where the async point occurs. If called, AddOnPreRenderCompleteAsync must be called before the async point—that is, no later than the page's PreRender event.


Asynchronous Data Binding

It's not all that common for ASP.NET pages to use HttpWebRequest directly to request other pages, but it is common for them to query databases and data bind the results. So how would you use asynchronous pages to perform asynchronous data binding? The codebehind class in Figure 4 shows one way to go about it.

AsyncDataBind.aspx.cs uses the same AddOnPreRenderCompleteAsync pattern that AsyncPage.aspx.cs uses. But rather than call HttpWebRequest.BeginGetResponse, its BeginAsyncOperation method calls SqlCommand.BeginExecuteReader (new in ADO.NET 2.0), to perform an asynchronous database query. When the call completes, EndAsyncOperation calls SqlCommand.EndExecuteReader to get a SqlDataReader, which it then stores in a private field. In an event handler for the PreRenderComplete event, which fires after the asynchronous operation completes but before the page is rendered, it then binds the SqlDataReader to the Output GridView control. On the outside, the page looks like a normal (synchronous) page that uses a GridView to render the results of a database query. But on the inside, this page is much more scalable because it doesn't tie up a thread-pool thread waiting for the query to return.
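A sketch of that pattern follows; the connection-string name and query are placeholders. Note that ADO.NET 2.0 requires "Asynchronous Processing=true" in the connection string before SqlCommand.BeginExecuteReader can be used:

```csharp
// Sketch of AsyncDataBind.aspx.cs; requires using System, System.Data.SqlClient,
// System.Web.Configuration, System.Web.UI
public partial class AsyncDataBind : System.Web.UI.Page
{
    private SqlConnection _connection;
    private SqlCommand _command;
    private SqlDataReader _reader;

    protected void Page_Load(object sender, EventArgs e)
    {
        AddOnPreRenderCompleteAsync(
            new BeginEventHandler(BeginAsyncOperation),
            new EndEventHandler(EndAsyncOperation)
        );
    }

    IAsyncResult BeginAsyncOperation(object sender, EventArgs e,
        AsyncCallback cb, object state)
    {
        // "MyConnectionString" is a placeholder; the connection string
        // must include Asynchronous Processing=true
        string connect = WebConfigurationManager.ConnectionStrings
            ["MyConnectionString"].ConnectionString;
        _connection = new SqlConnection(connect);
        _connection.Open();
        _command = new SqlCommand(
            "SELECT title_id, title, price FROM titles", _connection);
        return _command.BeginExecuteReader(cb, state);
    }

    void EndAsyncOperation(IAsyncResult ar)
    {
        // Store the reader so the PreRenderComplete handler can bind it
        _reader = _command.EndExecuteReader(ar);
    }

    protected void Page_PreRenderComplete(object sender, EventArgs e)
    {
        // Fires after the async operation completes, before rendering
        Output.DataSource = _reader;
        Output.DataBind();
    }

    public override void Dispose()
    {
        if (_connection != null) _connection.Close();
        base.Dispose();
    }
}
```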


Calling Web Services Asynchronously

Another I/O-related task commonly performed by ASP.NET Web pages is callouts to Web services. Since Web service calls can take a long time to return, pages that execute them are ideal candidates for asynchronous processing.

Figure 5 shows one way to build an asynchronous page that calls out to a Web service. It uses the same AddOnPreRenderCompleteAsync mechanism featured in Figure 1 and Figure 4. The page's Begin method launches an asynchronous Web service call by calling the Web service proxy's asynchronous Begin method. The page's End method caches in a private field a reference to the DataSet returned by the Web method, and the PreRenderComplete handler binds the DataSet to a GridView. For reference, the Web method targeted by the call is shown in the following code:

[WebMethod]
public DataSet GetTitles ()
{
    // The connection-string name was elided in the original listing;
    // "ConnectionString" is a placeholder
    string connect = WebConfigurationManager.ConnectionStrings
        ["ConnectionString"].ConnectionString;
    SqlDataAdapter adapter = new SqlDataAdapter
        ("SELECT title_id, title, price FROM titles", connect);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    return ds;
}

That's one way to do it, but it's not the only way. The .NET Framework 2.0 Web service proxies support two mechanisms for placing asynchronous calls to Web services. One is the per-method Begin and End methods featured in both .NET Framework 1.x and 2.0 Web service proxies. The other is the new MethodAsync methods and MethodCompleted events found only in the Web service proxies of the .NET Framework 2.0.

If a Web service has a method named Foo, then in addition to having methods named Foo, BeginFoo, and EndFoo, a .NET Framework version 2.0 Web service proxy includes a method named FooAsync and an event named FooCompleted. You can call Foo asynchronously by registering a handler for FooCompleted events and calling FooAsync, like this:

proxy.FooCompleted += new FooCompletedEventHandler (OnFooCompleted);
proxy.FooAsync (...);

void OnFooCompleted (Object source, FooCompletedEventArgs e)
{
    // Called when Foo completes
}

When the asynchronous call begun by FooAsync completes, a FooCompleted event fires, causing your FooCompleted event handler to be called. Both the delegate wrapping the event handler (FooCompletedEventHandler) and the second parameter passed to it (FooCompletedEventArgs) are generated along with the Web service proxy. You can access Foo's return value through FooCompletedEventArgs.Result.

Figure 6 presents a codebehind class that calls a Web service's GetTitles method asynchronously using the MethodAsync pattern. Functionally, this page is identical to the one in Figure 5. Internally, it's quite different. AsyncWSInvoke2.aspx includes an @ Page Async="true" directive, just like AsyncWSInvoke1.aspx. But AsyncWSInvoke2.aspx.cs doesn't call AddOnPreRenderCompleteAsync; it registers a handler for GetTitlesCompleted events and calls GetTitlesAsync on the Web service proxy. ASP.NET still delays rendering the page until GetTitlesAsync completes. Under the hood, it uses an instance of System.Threading.SynchronizationContext, another new class in 2.0, to receive notifications when the asynchronous call begins and when it completes.
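Figure 6 isn't reproduced here, but the MethodAsync pattern it demonstrates looks roughly like this. The proxy class name (PubsWebService) is an assumption; the GetTitlesCompleted event and GetTitlesAsync method follow the naming pattern described above:

```csharp
// Sketch of AsyncWSInvoke2.aspx.cs; no AddOnPreRenderCompleteAsync call
// is needed, yet ASP.NET still delays rendering until the call completes
public partial class AsyncWSInvoke2 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        PubsWebService proxy = new PubsWebService(); // hypothetical proxy
        proxy.GetTitlesCompleted +=
            new GetTitlesCompletedEventHandler(OnGetTitlesCompleted);
        proxy.GetTitlesAsync();
    }

    void OnGetTitlesCompleted(object sender, GetTitlesCompletedEventArgs e)
    {
        // e.Result is the DataSet returned by the GetTitles Web method;
        // impersonation, culture, and HttpContext.Current flow here
        Output.DataSource = e.Result;
        Output.DataBind();
    }
}
```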

There are two advantages to using MethodAsync rather than AddOnPreRenderCompleteAsync to implement asynchronous pages. First, MethodAsync flows impersonation, culture, and HttpContext.Current to the MethodCompleted event handler. AddOnPreRenderCompleteAsync does not. Second, if the page makes multiple asynchronous calls and must delay rendering until all the calls have been completed, using AddOnPreRenderCompleteAsync requires you to compose an IAsyncResult that remains unsignaled until all the calls have completed. With MethodAsync, no such hijinks are necessary; you simply place the calls, as many of them as you like, and the ASP.NET engine delays the rendering phase until the final call returns.


Asynchronous Tasks

MethodAsync is a convenient way to make multiple asynchronous Web service calls from an asynchronous page and delay the rendering phase until all the calls complete. But what if you want to perform several asynchronous I/O operations in an asynchronous page and those operations don't involve Web services? Does that mean you're back to composing an IAsyncResult that you can return to ASP.NET to let it know when the last call has completed? Fortunately, no.

In ASP.NET 2.0, the System.Web.UI.Page class introduces another method to facilitate asynchronous operations: RegisterAsyncTask. RegisterAsyncTask has four advantages over AddOnPreRenderCompleteAsync. First, in addition to Begin and End methods, RegisterAsyncTask lets you register a timeout method that's called if an asynchronous operation takes too long to complete. You can set the timeout declaratively by including an AsyncTimeout attribute in the page's @ Page directive. AsyncTimeout="5" sets the timeout to 5 seconds. The second advantage is that you can call RegisterAsyncTask several times in one request to register several async operations. As with MethodAsync, ASP.NET delays rendering the page until all the operations have completed. Third, you can use RegisterAsyncTask's fourth parameter to pass state to your Begin methods. Finally, RegisterAsyncTask flows impersonation, culture, and HttpContext.Current to the End and Timeout methods. As mentioned earlier in this discussion, the same is not true of an End method registered with AddOnPreRenderCompleteAsync.

In other respects, an asynchronous page that relies on RegisterAsyncTask is similar to one that relies on AddOnPreRenderCompleteAsync. It still requires an Async="true" attribute in the @ Page directive (or the programmatic equivalent, which is to set the page's AsyncMode property to true), and it still executes as normal through the PreRender event, at which time the Begin methods registered using RegisterAsyncTask are called and further request processing is put on hold until the last operation completes. To demonstrate, the codebehind class in Figure 7 is functionally equivalent to the one in Figure 1, but it uses RegisterAsyncTask instead of AddOnPreRenderCompleteAsync. Note the timeout handler named TimeoutAsyncOperation, which is called if HttpWebRequest.BeginGetResponse takes too long to complete. The corresponding .aspx file includes an AsyncTimeout attribute that sets the timeout interval to 5 seconds. Also note the null passed in RegisterAsyncTask's fourth parameter, which could have been used to pass data to the Begin method.
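The registration pattern can be sketched as follows. The exact signature shifted between beta builds; in the final release, RegisterAsyncTask accepts a PageAsyncTask object whose constructor takes the Begin, End, and Timeout handlers plus a state argument:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    PageAsyncTask task = new PageAsyncTask(
        new BeginEventHandler(BeginAsyncOperation),
        new EndEventHandler(EndAsyncOperation),
        new EndEventHandler(TimeoutAsyncOperation),
        null // state passed to the Begin method (unused here)
    );
    RegisterAsyncTask(task);
}

void TimeoutAsyncOperation(IAsyncResult ar)
{
    // Called if the operation outlives the page's AsyncTimeout
    // (set declaratively, e.g. AsyncTimeout="5" in the @ Page directive)
    Output.Text = "Data temporarily unavailable";
}
```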

The primary advantage of RegisterAsyncTask is that it allows asynchronous pages to fire off multiple asynchronous calls and delay rendering until all the calls have completed. It works perfectly well for one asynchronous call, too, and it offers a timeout option that AddOnPreRenderCompleteAsync doesn't. If you build an asynchronous page that makes just one async call, you can use AddOnPreRenderCompleteAsync or RegisterAsyncTask. But for asynchronous pages that place two or more async calls, RegisterAsyncTask simplifies your life considerably.

Since the timeout value is a per-page rather than per-call setting, you may be wondering whether it's possible to vary the timeout value for individual calls. The short answer is no. You can vary the timeout from one request to the next by programmatically modifying the page's AsyncTimeout property, but you can't assign different timeouts to different calls initiated from the same request.


Wrapping It Up

So there you have it—the skinny on asynchronous pages in ASP.NET 2.0. They're significantly easier to implement in this upcoming version of ASP.NET, and the architecture is such that you can batch multiple async I/O operations in one request and delay the rendering of the page until all the operations have completed. Combined with async ADO.NET and other new asynchronous features in the .NET Framework, async ASP.NET pages offer a powerful and convenient solution to the problem of I/O-bound requests that inhibit scalability by saturating the thread pool.

A final point to keep in mind as you build asynchronous pages is that you should not launch asynchronous operations that borrow from the same thread pool that ASP.NET uses. For example, calling ThreadPool.QueueUserWorkItem at a page's asynchronous point is counterproductive because that method draws from the thread pool, resulting in a net gain of zero threads for processing requests. By contrast, calling asynchronous methods built into the Framework, methods such as HttpWebRequest.BeginGetResponse and SqlCommand.BeginExecuteReader, is generally considered to be safe because those methods tend to use completion ports to implement asynchronous behavior.

Codebehind and Compilation in ASP.NET 2.0


As I write this column, the release candidates of the Microsoft® .NET Framework 2.0 and Visual Studio® 2005 have just come out, and by the time you read this, they will both already be on the shelves. It feels like it's been a long time coming.

I remember sitting in a room on the Microsoft campus in August of 2003 listening to Scott Guthrie and others (including my fellow columnist, Rob Howard) present the wide array of new features coming in ASP.NET 2.0. They astounded us with one demo after another of features that greatly simplified Web development, and did so in a pluggable and extensible fashion so that changes could be made at any level as needed during the development process.

Quite a bit has changed in the subsequent beta releases, mostly in the form of refinements, bug fixes, and control additions. However, one feature—the codebehind model—has changed rather dramatically since that first preview, primarily in response to customer feedback. Now on the cusp of the release, I thought I would take this opportunity to describe this new codebehind model, the rationale behind it, and how you as a Web developer will use it. I will also cover some of the potentially unexpected side effects of this model and how to plan for them in your designs. Note that the ASP.NET 2.0 runtime fully supports the 1.x model, so applications written for 1.x can run without modification.


Although the codebehind model is different in 2.0, its syntax has changed little. In fact, the change is so subtle that you may not even notice it unless you look really closely. Figure 1 shows the new codebehind syntax.

There are two differences between this model and the previous 1.x model—the introduction of the CodeFile attribute in the @ Page directive and the declaration of the codebehind class as a partial class. As you start building the page, you will notice another difference—server-side controls no longer need to be explicitly declared in your codebehind class, but you still have complete access to them programmatically. For example, the form in Figure 2 has several server-side controls that are used programmatically in the codebehind file, but notice the absence of any explicit control declarations in the codebehind class.

The reason this works has to do with the partial keyword applied to your codebehind class. In addition to turning your .aspx file into a class definition with methods for rendering the page, as it has always done, ASP.NET now also generates a sibling partial class for your codebehind class that contains protected control member variable declarations. Your class is then compiled together with this generated class definition and used as the base class for the class generated for the .aspx file. The end result is that you essentially write codebehind classes the way you always have, but you no longer have to declare (or let the designer declare for you) member variable declarations of server-side controls. This was always a somewhat fragile relationship in 1.x, since if you ever accidentally modified one of the control declarations so that it no longer matched the ID of the control declared on the form, things suddenly stopped working. Now the member variables are declared implicitly and will always be correct. Figure 3 shows an example set of classes involved.
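Conceptually, the classes involved look something like this; the class and control names are illustrative:

```csharp
// What you write (Default.aspx.cs) -- no control declarations needed:
public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // _message is declared in the generated sibling partial class,
        // matching the control's ID in the .aspx file
        _message.Text = DateTime.Now.ToString();
    }
}

// What ASP.NET generates as the sibling partial class (conceptually):
public partial class _Default
{
    protected System.Web.UI.WebControls.Label _message;
    // ...one protected field per server-side control in the .aspx file
}

// The class generated from the .aspx file then derives from the merged
// _Default class and contains the rendering methods.
```

Because both declarations are generated from the same .aspx markup, the field names can never drift out of sync with the control IDs.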

Note that the partial class model is only used if you use the CodeFile keyword in your @ Page directive. If you use the Inherits keyword without CodeFile (or with the src attribute instead), ASP.NET resorts to the 1.x codebehind style and simply places your class as the sole base class for the .aspx file. Also, if you have no codebehind at all, the class generation acts very much the same as it does in 1.x. Since ASP.NET 2.0 is backwards compatible with 1.x, there is now a range of codebehind options at your disposal.

Visual Studio 2005 will use the new partial class codebehind model for any Web Forms, and it will also happily convert Visual Studio .NET 2003 projects to use the new model as well if you use the conversion wizard. It is best, if possible, to convert all files to the new codebehind model, since some of the new features of ASP.NET 2.0 depend on it (if you're using Visual Studio, converting is pretty much the only option, since Visual Studio 2005 won't open unconverted 1.x projects). For example, strongly typed access to the Profile property bag is added to the sibling partial class for codebehind classes in 2.0, but if you use the 1.x codebehind model, that strongly typed accessor is added directly to the .aspx generated class definition, and will be unavailable to your codebehind class. This is also true for strongly typed Master Page and previous page access.



At this point, you may be wondering why the ASP.NET team bothered to use inheritance at all with this new codebehind model. ASP.NET could easily generate all of the control variable declarations in addition to the rendering methods from the .aspx file as a partial class which could then be merged with your simplified codebehind class. This is exactly how Windows Forms works in the .NET Framework 2.0. All of the designer-generated code is placed into a sibling partial class which is then merged with your application logic and event handlers into a single Form-derived class, creating a clean separation between machine-generated code and developer code without resorting to inheritance.

Well, it turns out that the original implementation of codebehind in ASP.NET 2.0 did exactly this—the codebehind class was just a partial class that was merged with the parsed .aspx file class definition. It was simple and effective, but unfortunately, not flexible enough. The problem with this model was that it was no longer possible to deploy the codebehind files in precompiled binary assemblies along with intact .aspx files since they now had to be compiled at the same time (a restriction when using partial classes is that all partial pieces of a class must be merged during a single compilation, and class definitions cannot span assemblies). This restriction was unacceptable to many developers as they were already used to being able to deploy binary codebehind assemblies along with intact .aspx files which could then be updated in place without having to recompile. This is, in fact, the exact model used by default in Visual Studio .NET 2003, and is thus very prevalent in practice.

As a result of reintroducing the inheritance model and shifting the partial class into the base class, .aspx files can now be deployed and compiled independently from the codebehind class. To complete the picture, you need some way to generate the sibling partial classes containing control variable declarations during compilation or deployment since this was always done in the past on demand in response to requests. Enter the ASP.NET compiler.

The ASP.NET compiler (aspnet_compiler.exe) was originally introduced in ASP.NET 2.0 as a way of completely precompiling an entire site, making it possible to deploy nothing but binary assemblies (even .aspx and .ascx files are precompiled). This is compelling because it eliminates any on-demand compilation when requests are made, eliminating the first postdeployment hit seen in some sites today. It also makes it more difficult for modifications to be made to the deployed site (since you can't just open .aspx files and change things), which can be appealing when deploying applications that you want to be changed only through a standard deployment process. The compiler that ships with the release version of ASP.NET 2.0 supports this binary-only deployment model, but it has also been enhanced to support an updatable deployment model, where all source code in a site is precompiled into binary assemblies, but all .aspx and .ascx files are left basically intact so that changes can be made on the server (the only changes to the .aspx and .ascx files involve the CodeFile attribute being removed and the Inherits attribute being modified to include the assembly name). This model is possible because of the reintroduction of inheritance in the codebehind model, so that the sibling partial classes containing control declarations can be generated and compiled independently of the actual .aspx file class definitions.

Figure 4 Binary Deployment with aspnet_compiler.exe

Figure 4 shows an invocation of the aspnet_compiler.exe utility using the binary deployment option, and the resulting output to a deployment directory. Note that the .aspx files in the deployment directory are just marker files with no content. They have been left there to ensure that a file with the endpoint name is present in case the "Check that file exists" option for the .aspx extension in an IIS app is set. The PrecompiledApp.config file is used to keep track of how the app was deployed and whether ASP.NET needs to compile any files at request time. To generate the "updatable" site, you would add a -u to the command line, and the resulting .aspx files would contain their original content (and not be empty marker files). Note that this functionality can also be accessed graphically through the Build | Publish Web Site menu item of Visual Studio 2005, as you can see in Figure 5. Both the command-line tool and Visual Studio rely on the ClientBuildManager class of the System.Web.Compilation namespace to provide this functionality.
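A typical pair of invocations looks something like the following; the virtual-directory name and paths are placeholders:

```
rem Binary deployment: .aspx files become empty marker files
aspnet_compiler -v /MyWebSite -p C:\src\MyWebSite C:\deploy\MyWebSite

rem Updatable deployment: source is compiled, .aspx content stays intact
aspnet_compiler -v /MyWebSite -p C:\src\MyWebSite -u C:\deploy\MyWebSite
```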

Figure 5 Build | Publish Web Site Tool in Visual Studio 2005

With the aspnet_compiler utility in hand, you can work on your application without worrying about how it will be deployed for the most part, since any site can now be deployed in any of three ways—all source, all binary, or updatable (source code in binary and .aspx files in source)—without any modification to page attributes or code files used in development. This was not possible in previous releases of ASP.NET since you had to decide at development time whether to use the src attribute to reference codebehind files or to precompile them and deploy the assemblies to the /bin directory. Complete binary deployment was not even an option.


Assembly Generation

Now that compilation into assemblies can happen in one of three places (either explicitly by the developer, using aspnet_compiler.exe, or during request processing), understanding the mapping of files into assemblies becomes even more important. In fact, depending on how you write your pages, you can actually end up with an application that works fine when deployed as all source or all binary, but which fails to compile when deployed using the updatable switch.

The model ASP.NET generally uses creates separate assemblies for the contents of the App_Code directory as well as the global.asax file (if present), and then compiles all of the .aspx pages in each directory into a separate assembly. (If pages in the same directory are authored in different languages or if they have dependencies on each other through an @ Reference directive, they could also end up in separate assemblies.) User controls and Master Pages are also typically compiled independently from .aspx pages. It is also possible to configure the App_Code directory to create multiple assemblies if, for example, you wanted to include both Visual Basic® and C# source code in a project. There are some subtleties in the details of assembly creation, depending on which mode of deployment you have chosen. Figure 6 describes the components of your Web site that compile into separate assemblies based on the deployment mode you are using. (Note that I am ignoring the resource, theme, and browser directories since they don't contain code, although they are compiled into separate assemblies as well. The target assembly can also differ based on language variance and reference dependencies, as mentioned previously.)

The only other twist in the assembly generation picture is that you can use the -fixednames option of aspnet_compiler to request that each .aspx file be compiled into a separate assembly whose name remains the same across different invocations of the compiler. This can be useful if you want to be able to update individual pages without modifying other assemblies on the deployment site. It can also generate a large number of assemblies for sites of any significant size, so be sure to test your deployment before depending on this option.

If this sounds complicated, the good news is that most of the time you shouldn't have to think about which files map to separate assemblies. Your .aspx files are always compiled last, and always include references to all other generated assemblies, so typically things will just work no matter what deployment model you choose.

One of the key differences in deployment that may actually affect the way you author code in your pages is the split in compilation when using updatable deployments. When you deploy an updatable site, the codebehind files are compiled into separate assemblies prior to deployment. The classes generated from the .aspx files are not compiled until a request is actually made for a file in a directory. This is in contrast to binary deployment, in which all files are compiled prior to deployment, and to source deployment, in which all files are compiled at request time. As a simple example of how this can cause problems, consider the user control (.ascx file) in Figure 7 with an embedded property, and an associated page that uses the control and sets the property from its codebehind class.

The page in Figure 7 will compile and run in either source or binary deployment mode, but will fail to compile when deployed as an updatable site since the definition of the Color property of the user control is unavailable at deployment time (this limitation also existed in the 1.x model). You can typically avoid issues like this by keeping all code in codebehind files or, at the other extreme, not using codebehind files at all and leaving code directly in .aspx and .ascx files.
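Figure 7 isn't reproduced here, but the scenario can be sketched as follows; all names are illustrative:

```csharp
// MyControl.ascx -- inline code, compiled only when first requested:
//
//   <%@ Control ClassName="MyControl" %>
//   <script runat="server">
//       public string Color
//       {
//           get { return _label.Text; }
//           set { _label.Text = value; }
//       }
//   </script>
//   <asp:Label id="_label" runat="server" />

// Page codebehind, precompiled into an assembly before deployment:
public partial class TestPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Under updatable deployment this line fails to compile:
        // the codebehind assembly is built at deployment time, before
        // the .ascx file (and hence the Color property) is compiled
        _myControl.Color = "Red";
    }
}
```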

Another thing to keep in mind when considering the file-to-assembly mapping is that the use of the internal keyword to prevent external assemblies from accessing methods in your classes may work in some deployment scenarios and not others, because of the different assembly mapping options. Unless you plan ahead of time which deployment option you will be using, it is probably best to avoid internal methods in your pages and stick to the type-scoped protection keywords: public, protected, and private.

Conclusion


The new codebehind model in ASP.NET 2.0 seems both familiar and foreign to ASP.NET developers. It's familiar because it still uses inheritance to relate codebehind classes to their .aspx-generated class definitions, yet foreign because elements like partial classes and the implicit generation of control member variable declarations are fundamental shifts. In practice, you will probably not notice much difference in usage, but it will be important to understand the class relationships and assembly mappings outlined here whenever you are doing something out of the ordinary, like creating a common base Page class or mixing codebehind and inline code models.

Create Advanced Web Applications With Object-Oriented Techniques


Recently I interviewed a software developer with five years of experience in developing Web applications. She'd been doing JavaScript for four and a half years, she rated her JavaScript skill as very good, and—as I found out soon after—she actually knew very little about JavaScript. I didn't really blame her for that, though. JavaScript is funny that way. It's the language a lot of people (including myself, until recently!) assume they're good at, just because they know C/C++/C# or they have some prior programming experience.

In a way, that assumption is not entirely groundless. It is easy to do simple things with JavaScript. The barrier to entry is very low; the language is forgiving and doesn’t require you to know a lot of things before you can start coding in it. Even a non-programmer can probably pick it up and write some useful scripts for a homepage in a matter of hours.

Indeed, until recently, I’d always been able to get by with whatever little JavaScript I knew, armed only with the MSDN® DHTML reference and my C++/C# experience. It was only when I started working on real-world AJAX applications that I realized how inadequate my JavaScript actually was. The complexity and interactivity of this new generation of Web applications requires a totally different approach to writing JavaScript code. These are serious JavaScript applications! The way we’ve been writing our throwaway scripts simply doesn’t cut it anymore.

Object-oriented programming (OOP) is one popular approach that’s used in many JavaScript libraries to make a codebase more manageable and maintainable. JavaScript supports OOP, but it does so in a very different manner from the way popular Microsoft® .NET Framework compliant languages like C++, C#, or Visual Basic® do it, so developers who have been working extensively with those languages may find doing OOP in JavaScript strange and counter-intuitive at first. I wrote this article to discuss in depth how the JavaScript language really supports object-oriented programming and how you can use this support to do object-oriented development effectively in JavaScript. Let’s start by talking about (what else?) objects.

JavaScript Objects Are Dictionaries

In C++ or C#, when we’re talking about objects, we’re referring to instances of classes or structs. Objects have different properties and methods, depending on which templates (that is, classes) they are instantiated from. That’s not the case with JavaScript objects. In JavaScript, objects are just collections of name/value pairs—think of a JavaScript object as a dictionary with string keys. We can get and set the properties of an object using either the familiar "." (dot) operator, or the "[]" operator, which is typically used when dealing with a dictionary. The following snippet

var userObject = new Object();
userObject.lastLoginTime = new Date();
does exactly the same thing as this:
var userObject = {}; // equivalent to new Object()
userObject["lastLoginTime"] = new Date();
We can also define the lastLoginTime property directly within userObject’s definition like this:
var userObject = { "lastLoginTime": new Date() };

Note how similar it is to the C# 3.0 object initializers. Also, those of you familiar with Python will recognize that the way we instantiate userObject in the second and third snippets is exactly how we’d specify a dictionary in Python. The only difference is that a JavaScript object/dictionary only accepts string keys, rather than hashable objects like a Python dictionary would.

These examples also show how much more malleable JavaScript objects are than C++ or C# objects. Property lastLoginTime doesn’t have to be declared beforehand—if userObject doesn’t have a property by that name, it will simply be added to userObject. This isn’t surprising if you remember that a JavaScript object is a dictionary—after all, we add new keys (and their respective values) to dictionaries all the time.
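That malleability is easy to demonstrate directly; the property names below are made up for illustration:

```javascript
var userObject = {};

// add properties on the fly; no declaration needed
userObject.lastLoginTime = new Date();
userObject["loginCount"] = 1;

// dot and [] syntax are interchangeable for reading, too
var count = userObject["loginCount"];            // 1

// enumerate the "dictionary" with
var keys = [];
for (var key in userObject) {
    keys.push(key);
}
// keys now holds "lastLoginTime" and "loginCount"

// removing an entry works just like removing a dictionary key
delete userObject.loginCount;
var stillThere = "loginCount" in userObject;     // false
```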

So, there we have object properties. How about object methods? Again, JavaScript is different from C++/C#. To understand object methods, I first need to take a closer look at JavaScript functions.


JavaScript Functions Are First Class

In many programming languages, functions and objects are usually considered two different things. In JavaScript, this distinction is blurred—a JavaScript function is really an object with executable code associated with it. Consider an ordinary function like this:

function func(x) {
    alert(x);
}
This is how we usually define a function in JavaScript. But you can also define the same function as follows, where you create an anonymous function object and assign it to the variable func
var func = function(x) {
    alert(x);
};
or even like this, using the Function constructor:
var func = new Function("x", "alert(x);");

This shows that a function is really just an object that supports a function call operation. That last way of defining a function using the Function constructor is not commonly used, but it opens up interesting possibilities because, as you may notice, the body of the function is just a String parameter to the Function constructor. That means you can construct arbitrary functions at run time.
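As a small sketch of that run-time construction (the expression built here is arbitrary):

```javascript
// assemble the function body as an ordinary string at run time
var op = "+";
var body = "return x " + op + " y;";

// the Function constructor turns the string into a callable function
var combine = new Function("x", "y", body);

var sum = combine(2, 3); // 5
```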

To demonstrate further that a function is an object, you can set or add properties to a function, just like you would to any other JavaScript objects:

function sayHi(x) {
    alert("Hi, " + x + "!");
}

sayHi.text = "Hello World!";
sayHi["text2"] = "Hello World... again.";

alert(sayHi["text"]); // displays "Hello World!"
alert(sayHi.text2); // displays "Hello World... again."

As objects, functions can also be assigned to variables, passed as arguments to other functions, returned as the values of other functions, stored as properties of objects or elements of arrays, and so on. Figure 1 provides an example of this.
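Figure 1 isn't reproduced in this excerpt, but a minimal sketch of treating functions as values might look like this:

```javascript
// store a function in a variable
var square = function(x) { return x * x; };

// pass a function as an argument to another function
function applyTwice(f, x) { return f(f(x)); }
var sixteen = applyTwice(square, 2);   // 16

// return a function as the value of another function
function makeAdder(n) {
    return function(x) { return x + n; };
}
var addFive = makeAdder(5);

// store functions as array elements and object properties
var ops = [square, addFive];
var calculator = { squareOp: square };
var nine = calculator.squareOp(3);     // 9
```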

With that in mind, adding methods to an object is as easy as choosing a name and assigning a function to that name. So I define three methods in the object by assigning anonymous functions to the respective method names:

var myDog = {
    "name" : "Spot",
    "bark" : function() { alert("Woof!"); },
    "displayFullName" : function() {
        alert( + " The Alpha Dog");
    },
    "chaseMrPostman" : function() {
        // implementation beyond the scope of this article
    }
};
myDog.displayFullName(); // Spot The Alpha Dog
myDog.bark(); // Woof!

The use of the "this" keyword inside the function displayFullName should be familiar to the C++/C# developers among us—it refers to the object through which the method is called (developers who use Visual Basic should find it familiar, too—it's called "Me" in Visual Basic). So in the example above, the value of "this" in displayFullName is the myDog object. The value of "this" is not static, though. Called through a different object, "this" will change to point to that object, as Figure 2 demonstrates.

The last line in Figure 2 shows an alternative way of calling a function as a method of an object. Remember, a function in JavaScript is an object. Every function object has a method named call, which calls the function as a method of the first argument. That is, whichever object we pass into call as its first argument will become the value of "this" in the function invocation. This will be a useful technique for calling the base class constructor, as we’ll see later.

One thing to remember is never to call functions that contain "this" without an owning object. If you do, you will be trampling over the global namespace, because in that call, "this" will refer to the Global object, and that can really wreak havoc in your application. For example, below is a script that changes the behavior of JavaScript’s global function isNaN. Definitely not recommended!

alert("NaN is NaN: " + isNaN(NaN)); // NaN is NaN: true

function x() {
    this.isNaN = function() {
        return "not anymore!";
    };
}
// alert!!! trampling the Global object!!!
x();

alert("NaN is NaN: " + isNaN(NaN)); // NaN is NaN: not anymore!

So we’ve seen ways to create an object, complete with its properties and methods. But if you notice all the snippets above, the properties and methods are hardcoded within the object definition itself. What if you need more control over the object creation? For example, you may need to calculate the values of the object’s properties based on some parameters. Or you may need to initialize the object’s properties to the values that you’ll only have at run time. Or you may need to create more than one instance of the object, which is a very common requirement.

In C#, we use classes to instantiate object instances. But JavaScript is different since it doesn’t have classes. Instead, as you’ll see in the next section, you take advantage of the fact that functions act as constructors when used together with the "new" operator.


Constructor Functions but No Classes

The strangest thing about JavaScript OOP is that, as noted, JavaScript doesn’t have classes like C# or C++ does. In C#, when you do something like this:

Dog spot = new Dog();
you get back an object, which is an instance of the class Dog. But in JavaScript there's no class to begin with. The closest you can get to a class is by defining a constructor function like this:
function DogConstructor(name) { = name;
    this.respondTo = function(name) {
        if( == name) {
            alert("Woof");
        }
    };
}

var spot = new DogConstructor("Spot");
spot.respondTo("Rover"); // nope
spot.respondTo("Spot"); // yeah!
OK, so what’s happening here? Ignore the DogConstructor function definition for a moment and examine this line:
var spot = new DogConstructor("Spot");

What the "new" operator does is simple. First, it creates a new empty object. Then, the function call that immediately follows is executed, with the new empty object set as the value of "this" within that function. In other words, the line above with the "new" operator can be thought of as similar to the two lines below:

// create an empty object
var spot = {};
// call the function as a method of the empty object, "Spot");
As you can see in the body of DogConstructor, invoking this function initializes the object to which the keyword “this” refers during that invocation. This way, you have a way of creating a template for objects! Whenever you need to create a similar object, you call “new” together with the constructor function, and you get back a fully initialized object as a result. Sounds very similar to a class, doesn’t it? In fact, usually in JavaScript the name of the constructor function is the name of the class you’re simulating, so in the example above you can just name the constructor function Dog:
// Think of this as class Dog
function Dog(name) {
    // instance variable = name;
    // instance method? Hmmm...
    this.respondTo = function(name) {
        if( == name) {
            alert("Woof");
        }
    };
}

var spot = new Dog("Spot");

In the Dog definition above, I defined an instance variable called name. Every object that is created using Dog as its constructor function will have its own copy of the instance variable name (which, as noted earlier, is just an entry into the object’s dictionary). This is expected; after all, each object does need its own copies of instance variables to carry its state. But if you look at the next line, every instance of Dog also has its own copy of the respondTo method, which is a waste; you only need one instance of respondTo to be shared among Dog instances! You can work around the problem by taking the definition of respondTo outside Dog, like this:

function respondTo() {
    // respondTo definition
}

function Dog(name) { = name;
    // attach this function as a method of the object
    this.respondTo = respondTo;
}

This way, all instances of Dog (that is, all instances created with the constructor function Dog) can share just one instance of the method respondTo. But as the number of methods grows, this becomes harder and harder to maintain. You end up with a lot of global functions in your codebase, and things only get worse as you have more and more "classes," especially if their methods have similar names. There's a better way to achieve this using prototype objects, which are the topic of the next section.

Prototypes


The prototype object is a central concept in object-oriented programming with JavaScript. The name comes from the idea that in JavaScript, an object is created as a copy of an existing example (that is, a prototype) object. Any properties and methods of this prototype object will appear as properties and methods of the objects created from that prototype’s constructor. You can say that these objects inherit their properties and methods from their prototype. When you create a new Dog object like this

var buddy = new Dog("Buddy");
the object referenced by buddy inherits properties and methods from its prototype, although it’s probably not obvious from just that one line where the prototype comes from. The prototype of the object buddy comes from a property of the constructor function (which, in this case, is the function Dog).

In JavaScript, every function has a property named "prototype" that refers to a prototype object. This prototype object in turn has a property named "constructor," which refers back to the function itself. It’s sort of a circular reference; Figure 3 illustrates this cyclic relationship better.

Figure 3 Every Function’s Prototype Has a Constructor Property

Now, when a function (in the example above, Dog) is used to create an object with the "new" operator, the resulting object will inherit the properties of Dog.prototype. In Figure 3, you can see that the Dog.prototype object has a constructor property that points back to the Dog function. Consequently, every Dog object (that inherits from Dog.prototype) will also appear to have a constructor property that points back to the Dog function. The code in Figure 4 confirms this. This relationship between constructor function, prototype object, and the object created with them is depicted in Figure 5.

Figure 5 Instances Inherit from Their Prototype

Some of you may have noticed the calls to the hasOwnProperty and isPrototypeOf methods in Figure 4. Where do these methods come from? They don’t come from Dog.prototype. In fact, there are other methods like toString, toLocaleString, and valueOf that we can call on Dog.prototype and instances of Dog, but which don’t come from Dog.prototype at all. It turns out that just like the .NET Framework has System.Object, which serves as the ultimate base class for all classes, JavaScript has Object.prototype, which is the ultimate base prototype for all prototypes. (The prototype of Object.prototype is null.)

In this example, remember that Dog.prototype is an object. It is created implicitly, as if by a call to the Object constructor function:

Dog.prototype = new Object();

So just like instances of Dog inherit from Dog.prototype, Dog.prototype inherits from Object.prototype. This makes all instances of Dog inherit Object.prototype’s methods and properties as well.

Every JavaScript object inherits a chain of prototypes, all of which terminate with Object.prototype. Note that this inheritance you’ve seen so far is inheritance between live objects. It is different from your usual notion of inheritance, which happens between classes when they are declared. Consequently, JavaScript inheritance is much more dynamic. It is done using a simple algorithm, as follows: when you try to access a property/method of an object, JavaScript checks if that property/method is defined in that object. If not, then the object’s prototype will be checked. If not, then that object’s prototype’s prototype will be checked, and so on, all the way to Object.prototype. Figure 6 illustrates this resolution process.

Figure 6 Resolving toString() Method in the Prototype Chain

The way JavaScript resolves property accesses and method calls dynamically has some consequences:

  • Changes made to a prototype object are immediately visible to the objects that inherit from it, even after these objects are created.
  • If you define a property/method X in an object, it hides the property/method of the same name in that object’s prototype. For instance, you can override Object.prototype’s toString method by defining a toString method in Dog.prototype.
  • Changes only go in one direction, from prototype to its derived objects, but not vice versa.

Figure 7 illustrates these consequences. Figure 7 also shows how to solve the problem of unnecessary method instances as encountered earlier. Instead of having a separate instance of a function object for every object, you can make the objects share the method by putting it inside the prototype. In this example, the getBreed method is shared by rover and spot—until you override the getBreed method in spot, anyway. After that, spot has its own version of the getBreed method, but the rover object and subsequent objects created with new GreatDane will still share that one instance of the getBreed method defined in the GreatDane.prototype object.
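Figure 7 isn't reproduced here, but the sharing-then-overriding behavior it illustrates can be sketched like this (GreatDane and getBreed follow the names used above; the strings are illustrative):

```javascript
function GreatDane() { }
GreatDane.prototype.getBreed = function() { return "Great Dane"; };

var rover = new GreatDane();
var spot = new GreatDane();

// both objects resolve getBreed to the single copy on the prototype
var shared = (rover.getBreed === spot.getBreed);       // true

// changes to the prototype are visible to already-created instances
GreatDane.prototype.getBreed = function() { return "Grrreat Dane"; };
var afterChange = rover.getBreed();                    // "Grrreat Dane"

// overriding in spot hides the prototype's copy, for spot only
spot.getBreed = function() { return "Little Great Dane"; };
var spotOwn = spot.getBreed();                         // "Little Great Dane"
var roverStill = rover.getBreed();                     // "Grrreat Dane"
```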


Static Properties and Methods

Sometimes you need properties or methods that are tied to classes instead of instances—that is, static properties and methods. JavaScript makes this easy, since functions are objects whose properties and methods can be set as desired. Since a constructor function represents a class in JavaScript, you can add static methods and properties to a class simply by setting them in the constructor function like this:

// class DateTime
function DateTime() { }

// set static method now() = function() {
    return new Date();
};

// call the static method
alert(;

The syntax for calling the static methods in JavaScript is virtually identical to how you’d do it in C#. This shouldn’t come as a surprise since the name of the constructor function is effectively the name of the class. So you have classes, and you have public properties/methods, and static properties/methods. What else do you need? Private members, of course. But JavaScript doesn’t have native support for private members (nor for protected, for that matter). All properties and methods of an object are accessible to anyone. There is a way to have private members in your class, but to do so you first need to understand closures.

Closures


I didn’t learn JavaScript of my own volition. I had to pick it up quickly because I realized that I was ill-prepared to work on a real-world AJAX application without it. At first, I felt like I had gone down a few levels in the programmer hierarchy. (JavaScript! What would my C++ friends say?) But once I got over my initial resistance, I realized that JavaScript was actually a powerful, expressive, and compact language. It even boasts features that other, more popular languages are only beginning to support.

One of JavaScript’s more advanced features is its support for closures, which C# 2.0 supports through its anonymous methods. A closure is a runtime phenomenon that comes about when an inner function (or in C#, an inner anonymous method) is bound to the local variables of its outer function. Obviously, it doesn’t make much sense unless this inner function is somehow made accessible outside the outer function. An example will make this clearer.

Let’s say you need to filter a sequence of numbers based on a simple criterion that only numbers bigger than 100 can pass, while the rest are filtered out. You can write a function like the one in Figure 8.
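Figure 8 isn't included in this excerpt; a minimal version of such a filter function might look like this (the names filter and someRandomNumbers match the later snippets, but the numbers are made up):

```javascript
// keep only the numbers for which the predicate returns true
function filter(pred, arr) {
    var result = [];
    for (var i = 0; i < arr.length; i++) {
        if (pred(arr[i])) {
            result.push(arr[i]);
        }
    }
    return result;
}

var someRandomNumbers = [12, 32, 1, 171, 2000, 68, 104];
var biggerThan100 = filter(
    function(x) { return (x > 100) ? true : false; },
    someRandomNumbers);
// biggerThan100 is [171, 2000, 104]
```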

But now you want to create a different filtering criterion, let’s say this time only numbers bigger than 300. You can do something like this:

var greaterThan300 = filter(
    function(x) { return (x > 300) ? true : false; },
    someRandomNumbers);

And then maybe you need to filter numbers that are bigger than 50, 25, 10, 600, and so on, but then, being the smart person you are, you realize that they’re all the same predicate, "greater than." Only the number is different. So you can factor the number out with a function like this

function makeGreaterThanPredicate(lowerBound) {
    return function(numberToCheck) {
        return (numberToCheck > lowerBound) ? true : false;
    };
}
which lets you do something like this:
var greaterThan10 = makeGreaterThanPredicate(10);
var greaterThan100 = makeGreaterThanPredicate(100);
alert(filter(greaterThan10, someRandomNumbers));
alert(filter(greaterThan100, someRandomNumbers));

Watch the inner anonymous function returned by the function makeGreaterThanPredicate. That anonymous inner function uses lowerBound, which is an argument passed to makeGreaterThanPredicate. By the usual rules of scoping, lowerBound goes out of scope when makeGreaterThanPredicate exits! But in this case, that inner anonymous function still carries lowerBound with it, even long after makeGreaterThanPredicate exits. This is what we call closure—because the inner function closes over the environment (that is, the arguments and local variables of the outer function) in which it is defined.

Closures may not seem like a big deal at first. But used properly, they open up interesting new possibilities in the way you can translate your ideas into code. One of the most interesting uses of closures in JavaScript is to simulate private variables of a class.


Simulating Private Properties

OK, so let’s see how closures can help in simulating private members. A local variable in a function is normally not accessible from outside the function. After the function exits, for all practical purposes that local variable is gone forever. However, when that local variable is captured by an inner function’s closure, it lives on. This fact is the key to simulating JavaScript private properties. Consider the following Person class:

function Person(name, age) {
    this.getName = function() { return name; };
    this.setName = function(newName) { name = newName; };
    this.getAge = function() { return age; };
    this.setAge = function(newAge) { age = newAge; };
}

The arguments name and age are local to the constructor function Person. The moment Person returns, name and age are supposed to be gone forever. However, they are captured by the four inner functions that are assigned as methods of a Person instance, in effect making name and age live on, but only accessible strictly through these four methods. So you can do this:

var ray = new Person("Ray", 31);
ray.setName("Younger Ray");
ray.setAge(22); // Instant rejuvenation!
alert(ray.getName() + " is now " + ray.getAge() +
    " years old.");

Private members that don’t get initialized in the constructor can be local variables of the constructor function, like this:

function Person(name, age) {
    var occupation;
    this.getOccupation = function() { return occupation; };
    this.setOccupation = function(newOcc) {
        occupation = newOcc;
    };

    // accessors for name and age
}
Note that these private members are slightly different from what we’d expect from private members in C#. In C#, the public methods of a class can access its private members. But in JavaScript, the private members are accessible only through methods that have these private members within their closures (these methods are usually called privileged methods, since they are different from ordinary public methods). So within Person’s public methods, you still have to access a private member through its privileged accessor methods:
Person.prototype.somePublicMethod = function() {
    // doesn’t work!
    // alert(;
    // this one below works
    alert(this.getName());
};
Douglas Crockford is widely known as the first person to discover (or perhaps publish) the technique of using closures to simulate private members. His Web site,, contains a wealth of information on JavaScript—any developer interested in JavaScript should check it out.


Inheriting from Classes

OK, you’ve seen how constructor functions and prototype objects allow you to simulate classes in JavaScript. You’ve seen that the prototype chain ensures that all objects have the common methods of Object.prototype. You’ve seen how you can simulate private members of a class using closures. But something is missing here. You haven’t seen how you can derive from your class; that’s an everyday activity in C#. Unfortunately, inheriting from a class in JavaScript is not simply a matter of typing a colon like in C#; it takes more than that. On the other hand, JavaScript is so flexible that there are a lot of ways of inheriting from a class.

Let’s say, for example, you have a base class Pet, with one derived class Dog, as in Figure 9. How do you go about this in JavaScript? The Pet class is easy. You’ve seen how you can do this:

Figure 9 Classes
// class Pet
function Pet(name) {
    this.getName = function() { return name; };
    this.setName = function(newName) { name = newName; };
}

Pet.prototype.toString = function() {
    return "This pet's name is: " + this.getName();
};
// end of class Pet

var parrotty = new Pet("Parrotty the Parrot");
alert(parrotty); // This pet's name is: Parrotty the Parrot

Now what if you want to create a class Dog, which derives from Pet? As you can see in Figure 9, Dog has an extra property, breed, and it overrides Pet’s toString method (note that the convention in JavaScript is to use camel casing for method and property names, instead of the Pascal casing recommended for C#). Figure 10 shows how it is done.

The prototype-replacement trick in Figure 10 sets up the prototype chain properly, so instanceof tests work as you would expect if you were using C#. The privileged methods also still work as expected.


Simulating Namespaces

In C++ and C#, namespaces are used to minimize the probability of name collisions. In the .NET Framework, namespaces help differentiate the Microsoft.Build.Task.Message class from System.Messaging.Message, for example. JavaScript doesn’t have any specific language features to support namespaces, but it’s easy to simulate a namespace using objects. Let’s say you want to create a JavaScript library. Instead of defining functions and classes globally, you can wrap them in a namespace like this:

var MSDNMagNS = {};

MSDNMagNS.Pet = function(name) { /* code here */ };
MSDNMagNS.Pet.prototype.toString = function() { /* code */ };

var pet = new MSDNMagNS.Pet("Yammer");

One level of namespace may not be unique, so you can create nested namespaces:

var MSDNMagNS = {};
// nested namespace "Examples"
MSDNMagNS.Examples = {};

MSDNMagNS.Examples.Pet = function(name) { /* code */ };
MSDNMagNS.Examples.Pet.prototype.toString = function() { /* code */ };

var pet = new MSDNMagNS.Examples.Pet("Yammer");
As you can imagine, typing those long nested namespaces can get tiresome pretty fast. Fortunately, it’s easy for the users of your library to alias your namespace into something shorter:
// MSDNMagNS.Examples and Pet definition...

// think "using Eg = MSDNMagNS.Examples;"
var Eg = MSDNMagNS.Examples;
var pet = new Eg.Pet("Yammer");

If you take a look at the source code of the Microsoft AJAX Library, you’ll see that the library’s authors use a similar technique to implement namespaces (take a look at the implementation of the static method Type.registerNamespace). See the sidebar "OOP and ASP.NET AJAX" for more information.
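The actual implementation of Type.registerNamespace is beyond this article, but a simplified helper in the same spirit might look like this; the function name and signature here are illustrative, not the library's real API:

```javascript
// create every object along a dotted path under root, reusing any that exist
function registerNamespace(path, root) {
    var parts = path.split(".");
    var current = root;
    for (var i = 0; i < parts.length; i++) {
        if (!current[parts[i]]) {
            current[parts[i]] = {};
        }
        current = current[parts[i]];
    }
    return current;
}

var MSDNMagNS = {};
registerNamespace("Examples.Pets", MSDNMagNS);

// the nested namespace objects now exist and can hold classes
MSDNMagNS.Examples.Pets.Dog = function(name) { /* code */ };
var exists = (typeof MSDNMagNS.Examples.Pets.Dog === "function"); // true
```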


Should You Code JavaScript This Way?

You’ve seen that JavaScript supports object-oriented programming just fine. Although it was designed as a prototype-based language, it is flexible and powerful enough to accommodate the class-based programming style that is typically found in other popular languages. But the question is: should you code JavaScript this way? Should you code in JavaScript the way you code in C# or C++, coming up with clever ways to simulate features that aren’t there? Each programming language is different, and the best practices for one language may not be the best practices for another.

In JavaScript, you’ve seen that objects inherit from objects (as opposed to classes inheriting from classes). So it is possible that making a lot of classes using a static inheritance hierarchy is not the JavaScript way. Maybe, as Douglas Crockford says in his article "Prototypal Inheritance in JavaScript", the JavaScript way of programming is to make prototype objects, and use the simple object function below to make new objects, which inherit from that original object:

function object(o) {
    function F() {}
    F.prototype = o;
    return new F();
}
Then, since objects in JavaScript are malleable, you can easily augment the object after its creation with new fields and new methods as necessary.

This is all good, but it is undeniable that the majority of developers worldwide are more familiar with class-based programming. Class-based programming is here to stay, in fact. According to the upcoming edition 4 of the ECMA-262 specification (ECMA-262 is the official specification for JavaScript), JavaScript 2.0 will have true classes. So JavaScript is moving towards being a class-based language. However, it will probably take years for JavaScript 2.0 to reach widespread use. In the meantime, it’s important to know the current JavaScript well enough to read and write JavaScript code in both prototype-based style and class-based style.


Putting It into Perspective

With the proliferation of interactive, client-heavy AJAX applications, JavaScript is quickly becoming one of the most useful tools in a .NET developer’s arsenal. However, its prototypal nature may initially surprise developers who are more used to languages such as C++, C#, or Visual Basic. I have found my JavaScript journey a rewarding experience, although not entirely without frustration along the way. If this article can help make your experience smoother, then I’m happy, for that’s my goal.
