Thursday, January 29, 2009

Encryption in C# .NET

This class uses a symmetric key algorithm (Rijndael/AES) to encrypt and decrypt data. As long as encryption and decryption routines use the same parameters to generate the keys, the keys are guaranteed to be the same.

In a real-life application, this may not be the most efficient way of handling encryption, so as soon as you feel comfortable with it you may want to redesign this class.


using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public class Encryption
{
    /// <summary>
    /// Encrypts specified text using Rijndael symmetric key algorithm
    /// and returns a base64-encoded result.
    /// </summary>
    /// <param name="plainText">
    /// Plaintext value to be encrypted.
    /// </param>
    /// <param name="passPhrase">
    /// Passphrase from which a pseudo-random password will be derived. The
    /// derived password will be used to generate the encryption key.
    /// Passphrase can be any string. In this example we assume that this
    /// passphrase is an ASCII string.        
    /// </param>
    /// <param name="saltValue">
    /// Salt value used along with passphrase to generate password. Salt can
    /// be any string. In this example we assume that salt is an ASCII string.
    /// </param>
    /// <param name="hashAlgorithm">
    /// Hash algorithm used to generate password. Allowed values are: "MD5" and
    /// "SHA1". SHA1 hashes are a bit slower, but more secure than MD5 hashes.
    /// </param>
    /// <param name="passwordIterations">
    /// Number of iterations used to generate password. One or two iterations
    /// should be enough.
    /// </param>
    /// <param name="initVector">
    /// Initialization vector (or IV). This value is required to encrypt the
    /// first block of plaintext data. For RijndaelManaged class IV must be
    /// exactly 16 ASCII characters long.
    /// </param>
    /// <param name="keySize">
    /// Size of encryption key in bits. Allowed values are: 128, 192, and 256.
    /// Longer keys are more secure than shorter keys.
    /// </param>
    /// <returns>
    /// Encrypted value formatted as a base64-encoded string.
    /// </returns>
    public string Encrypt(string plainText, string passPhrase, string saltValue, string hashAlgorithm, int passwordIterations, string initVector, int keySize)
    {
        // Convert strings into byte arrays.
        // Let us assume that strings only contain ASCII codes.
        // If strings include Unicode characters, use Unicode, UTF7, or UTF8
        // encoding.
        byte[] initVectorBytes = Encoding.ASCII.GetBytes(initVector);
        byte[] saltValueBytes = Encoding.ASCII.GetBytes(saltValue);

        // Convert our plaintext into a byte array.
        // Let us assume that plaintext contains UTF8-encoded characters.
        byte[] plainTextBytes = Encoding.UTF8.GetBytes(plainText);

        // First, we must create a password, from which the key will be derived.
        // This password will be generated from the specified passphrase and
        // salt value. The password will be created using the specified hash
        // algorithm. Password creation can be done in several iterations.
        PasswordDeriveBytes password = new PasswordDeriveBytes(passPhrase, saltValueBytes, hashAlgorithm, passwordIterations);

        // Use the password to generate pseudo-random bytes for the encryption
        // key. Specify the size of the key in bytes (instead of bits).
        byte[] keyBytes = password.GetBytes(keySize / 8);

        // Create uninitialized Rijndael encryption object.
        RijndaelManaged symmetricKey = new RijndaelManaged();

        // It is reasonable to set encryption mode to Cipher Block Chaining
        // (CBC). Use default options for other symmetric key parameters.
        symmetricKey.Mode = CipherMode.CBC;

        // Generate encryptor from the existing key bytes and initialization
        // vector. Key size will be defined based on the number of the key
        // bytes.
        ICryptoTransform encryptor = symmetricKey.CreateEncryptor(keyBytes, initVectorBytes);

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream();

        // Define cryptographic stream (always use Write mode for encryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);
        // Start encrypting.
        cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Length);

        // Finish encrypting.
        cryptoStream.FlushFinalBlock();

        // Convert our encrypted data from a memory stream into a byte array.
        byte[] cipherTextBytes = memoryStream.ToArray();

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

        // Convert encrypted data into a base64-encoded string.
        string cipherText = Convert.ToBase64String(cipherTextBytes);

        // Return encrypted string.
        return cipherText;
    }

    /// <summary>
    /// Decrypts specified ciphertext using Rijndael symmetric key algorithm.
    /// </summary>
    /// <param name="cipherText">
    /// Base64-formatted ciphertext value.
    /// </param>
    /// <param name="passPhrase">
    /// Passphrase from which a pseudo-random password will be derived. The
    /// derived password will be used to generate the encryption key.
    /// Passphrase can be any string. In this example we assume that this
    /// passphrase is an ASCII string.
    /// </param>
    /// <param name="saltValue">
    /// Salt value used along with passphrase to generate password. Salt can
    /// be any string. In this example we assume that salt is an ASCII string.
    /// </param>
    /// <param name="hashAlgorithm">
    /// Hash algorithm used to generate password. Allowed values are: "MD5" and
    /// "SHA1". SHA1 hashes are a bit slower, but more secure than MD5 hashes.
    /// </param>
    /// <param name="passwordIterations">
    /// Number of iterations used to generate password. One or two iterations
    /// should be enough.
    /// </param>
    /// <param name="initVector">
    /// Initialization vector (or IV). This value is required to encrypt the
    /// first block of plaintext data. For RijndaelManaged class IV must be
    /// exactly 16 ASCII characters long.
    /// </param>
    /// <param name="keySize">
    /// Size of encryption key in bits. Allowed values are: 128, 192, and 256.
    /// Longer keys are more secure than shorter keys.
    /// </param>
    /// <returns>
    /// Decrypted string value.
    /// </returns>
    /// <remarks>
    /// Most of the logic in this function is similar to the Encrypt
    /// logic. In order for decryption to work, all parameters of this function
    /// - except cipherText value - must match the corresponding parameters of
    /// the Encrypt function which was called to generate the
    /// ciphertext.
    /// </remarks>
    public string Decrypt(string cipherText, string passPhrase, string saltValue, string hashAlgorithm, int passwordIterations, string initVector, int keySize)
    {
        // Convert strings defining encryption key characteristics into byte
        // arrays. Let us assume that strings only contain ASCII codes.
        // If strings include Unicode characters, use Unicode, UTF7, or UTF8
        // encoding.
        byte[] initVectorBytes = Encoding.ASCII.GetBytes(initVector);
        byte[] saltValueBytes = Encoding.ASCII.GetBytes(saltValue);

        // Convert our ciphertext into a byte array.
        byte[] cipherTextBytes = Convert.FromBase64String(cipherText);

        // First, we must create a password, from which the key will be
        // derived. This password will be generated from the specified
        // passphrase and salt value. The password will be created using
        // the specified hash algorithm. Password creation can be done in
        // several iterations.
        PasswordDeriveBytes password = new PasswordDeriveBytes(passPhrase, saltValueBytes, hashAlgorithm, passwordIterations);

        // Use the password to generate pseudo-random bytes for the encryption
        // key. Specify the size of the key in bytes (instead of bits).
        byte[] keyBytes = password.GetBytes(keySize / 8);

        // Create uninitialized Rijndael encryption object.
        RijndaelManaged symmetricKey = new RijndaelManaged();

        // It is reasonable to set encryption mode to Cipher Block Chaining
        // (CBC). Use default options for other symmetric key parameters.
        symmetricKey.Mode = CipherMode.CBC;

        // Generate decryptor from the existing key bytes and initialization
        // vector. Key size will be defined based on the number of the key
        // bytes.
        ICryptoTransform decryptor = symmetricKey.CreateDecryptor(keyBytes, initVectorBytes);

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream(cipherTextBytes);

        // Define cryptographic stream (always use Read mode for decryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, decryptor, CryptoStreamMode.Read);

        // Since at this point we don't know what the size of decrypted data
        // will be, allocate the buffer long enough to hold ciphertext;
        // plaintext is never longer than ciphertext.
        byte[] plainTextBytes = new byte[cipherTextBytes.Length];

        // Start decrypting.
        int decryptedByteCount = cryptoStream.Read(plainTextBytes, 0, plainTextBytes.Length);

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

        // Convert decrypted data into a string.
        // Let us assume that the original plaintext string was UTF8-encoded.
        string plainText = Encoding.UTF8.GetString(plainTextBytes, 0, decryptedByteCount);

        // Return decrypted string.
        return plainText;
    }
}
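To see the class in action, here is a minimal round-trip sketch. The passphrase, salt, and IV values below are illustrative placeholders only, not recommendations:

```csharp
// A usage sketch of the Encryption class above.
Encryption crypto = new Encryption();

string plainText = "Hello, World!";
string passPhrase = "Pas5pr@se";          // can be any string
string saltValue = "s@1tValue";           // can be any string
string hashAlgorithm = "SHA1";            // can be "SHA1" or "MD5"
int passwordIterations = 2;
string initVector = "@1B2c3D4e5F6g7H8";   // must be exactly 16 ASCII characters
int keySize = 256;                        // can be 256, 192, or 128

string cipherText = crypto.Encrypt(plainText, passPhrase, saltValue,
    hashAlgorithm, passwordIterations, initVector, keySize);

string decrypted = crypto.Decrypt(cipherText, passPhrase, saltValue,
    hashAlgorithm, passwordIterations, initVector, keySize);

// decrypted equals plainText as long as both calls use identical parameters.
```

If any of the key-generation parameters differ between the two calls, decryption will fail or produce garbage rather than the original plaintext.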

Tuesday, January 27, 2009

Cross Page postback in ASP.NET

This is a new feature in ASP.NET 2.0. The IButtonControl interface contains a new property called PostBackUrl, which points to the page to which the current page will post back. Button, ImageButton, and LinkButton implement this interface and expose the cross-page postback functionality.

When the user clicks the button, the current page posts back to the specified page, which can access the source page's controls through the Page.PreviousPage property. This property returns a reference to the previous page; once you have that reference, you can use the FindControl method to get a reference to a particular control, or you can expose public properties from the source page to provide type-safe access.

For example:


Label1.Text =((TextBox)this.PreviousPage.FindControl("TextBox1")).Text;

To provide strongly typed access to the previous page, you can specify it in the PreviousPageType directive; because you then have a strongly typed reference to the previous page, you can access its public members without any casting.

Example:

Create two .aspx pages; name the first step_1.aspx and the second step_2.aspx.
In step_1.aspx, add a TextBox named TextBox1.
Expose the following property in the code-behind file:

public TextBox MyName { get { return TextBox1; } }
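For the cross-page postback itself, step_1.aspx also needs a button whose PostBackUrl points at the second page. A sketch of the markup (the button ID is illustrative):

```
<asp:TextBox ID="TextBox1" runat="server" />
<asp:Button ID="NextButton" runat="server" Text="Next"
            PostBackUrl="~/step_2.aspx" />
```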

In step_2.aspx, add this directive to the markup:

<%@ PreviousPageType VirtualPath="~/step_1.aspx" %>

Then, in the Page_Load event of step_2.aspx, write this code:

Response.Write(this.PreviousPage.MyName.Text);

Sunday, January 25, 2009

Ajax Modal popup Extender - Infinite Scroll bar in ASP.NET 3.5


I was using the Ajax ModalPopupExtender on a page that uses a master page, in a web application I am working on. After the modal popup came up, an infinite scroll bar and a black box appeared, and when I scrolled down, the background became editable.

It was a very strange problem, but I was able to fix it. Here is the solution:


Make sure your master page has this line:



<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">


Or put it in the page itself if you are using a user control.

Friday, January 23, 2009

The permissions granted to user 'Domain\UserName' are insufficient for performing this operation

I faced a problem when I tried to open the Reporting Services web site for SQL Server 2005 running on Vista Ultimate.

As you can see from the title of the post, the error was

"The permissions granted to user 'Domain\UserName' are insufficient for performing this operation"

I installed SQL Server 2005 Service Pack 2 and granted the user administrator permissions, but I still got this error!

Finally, just open Internet Explorer by right-clicking it and choosing Run as Administrator, and it will work.

Wednesday, January 21, 2009

Benefits of ASP.NET

ASP.NET provides many truly impressive benefits over classic ASP, mainly because it is built into the managed environment of the .NET Framework. ASP.NET applications are compiled, rather than interpreted, and they benefit from a sophisticated Class Framework that provides a powerful, extensible application programming interface (API).

At the risk of rehashing what others have already said, the benefits of ASP.NET are as follows:
  • ASP.NET applications are developed using compiled code and can be written using any .NET-compliant language, including VB .NET, C# .NET, and J# .NET. ASP.NET is built into the .NET Class Framework, so it benefits from an extensive API as well as all the benefits of the .NET managed environment, including type safety, multilanguage support, optimized memory management, and a just-in-time (JIT) compilation model that optimizes the availability and performance of the application at runtime.
  • ASP.NET provides an extensible architecture that you can easily customize. For example, every Web form can be programmatically accessed using the Page handler object. This object not only encapsulates the Web form, but it also provides direct access to the context objects, including the HttpRequest, HttpApplication, and HttpSessionState objects.
  • The ASP.NET runtime engine architecture is also designed for optimal performance and extensibility. The runtime engine executes in a separate process from Internet Information Server (IIS), and it delegates HTTP requests across a range of handler classes. These classes can be customized or even dropped from an application if they do not provide any benefit. Custom HTTP handlers process HTTP requests at the level of the ASP.NET runtime engine. You can write custom HTTP handlers using a greatly simplified API that avoids the complexity of traditional ISAPI modules.
  • ASP.NET provides several ways to cache page content, including full-page caching and partial-page caching (fragment caching). In addition, ASP.NET provides a Cache API for even more granular control over cached content.
  • ASP.NET provides sophisticated debugging and tracing abilities that make application troubleshooting easier than before. The .NET-compliant languages provide structured error handling and a large set of detailed exception classes. Errors will still occur in ASP.NET, of course, but you now benefit from more detailed exception reporting than when using classic ASP.
  • ASP.NET provides several methods for authenticating clients, including Windows authentication, Forms-based authentication, Passport-based authentication, and standard IIS authentication (impersonation).
Quoted.

Monday, January 19, 2009

Monitoring Performance

We have established that performance monitoring does not just begin in a vacuum. It begins once the developers and system architects have released the first version of an application and have collected initial performance measurements.

Presumably the team has also established its performance expectations for the Web application and already has a sense of whether additional optimization will be required. The initial baseline is a collection of measurements on specific performance factors. These factors combine to produce the overall responsiveness of the application on a specific hardware platform, and under a specific load.

Performance profiling involves three important steps:
  • Monitoring: Monitoring includes setting up the counters and traces to collect performance data and picking a sampling period.
  • Analysis: The monitoring data must be collected, analyzed for problems, and compared against the baseline data.
  • Loading, or stress testing: This involves a forceful ramping up of load on the Web application to observe how the performance metrics change in response. Monitoring by itself can be a passive activity where measurements are collected and analyzed, but the system is allowed to operate without interference by the development team.
Once performance profile reports have been generated and analyzed, the technical team may choose to optimize a specific area of the application or the system. Once the optimization has been implemented, the iterative cycle of profiling and optimization begins again for as long as it takes to bring the application performance within an acceptable range.

Optimization is never really completed. It continues for as long as necessary, or for as long as the team thinks that further performance improvements are possible. Ultimately, the goal is to bring performance to a level where the users are satisfied and to have the technical team feel satisfied they have delivered an application whose performance meets everyone's expectations.

Quoted.

Saturday, January 17, 2009

Measuring Application Performance

You can measure application performance by three broad measures:
  • Throughput: Throughput is the number of requests that a Web application can serve in a specified unit of time. Throughput is typically specified in requests per second.
  • Availability: Availability is the percentage of time a Web application is responsive to client requests.
  • Scalability: Scalability is the ability of a Web application to maintain or improve performance as the user load increases. Scalability also refers to the ability of an application to recognize performance benefits as server resources increase.
Throughput was already discussed, so let's explore the additional measures, availability and scalability, in more detail.

Assessing Availability

Of course, performance is not the only metric that matters for ASP.NET applications. Application availability is equally important and is defined as the percentage of time that an application is functional. Availability is in many ways a harder metric to quantify, compared to performance, because hardware issues factor into the equation more than software issues do.

The factors that affect application availability include the following:
  • Hardware: Web servers and database servers obviously have to remain running for the hosted Web application to stay available. Multiserver Web architectures are designed for fault tolerance, usually by providing redundancy and backup drives both for the application files and for the database files.
  • Load: Overtaxed systems are susceptible to failure if the user load exceeds what either the hardware or the software was designed to accommodate. Load can be anticipated through capacity planning and designed for at the software and the hardware levels.
  • Network latency: This factor refers to delays in the transmission of a request or a response on the network. Latency may result from congestion on the network. Alternatively, it may result from an inefficient network—one that requires too many jumps between the sender and the receiver. Network latency is controllable on a local area network (LAN) or a virtual private network (VPN), but it is out of your control over a public Internet connection.
  • Connection bandwidth: Application users on the public Internet may not have the same amount of connection bandwidth; for example, broadband users have "a lot," and dial-up users have "very little." It is hard to say much about this factor, given that it is typically out of the developer's control for a Web application on the public Internet. About the only good thing to say about this factor is that users tend to adjust their expectations in proportion to the type of connection they are using. In other words, dial-up users expect to wait longer, and cable modem users do not.
  • Software: Software issues typically affect the performance of an application, rather than its availability. However, code that causes an application to become unstable and crash is an important factor in application availability. In general, code that is thread-safe is stable and unlikely to crash. Thread-safe code is much easier to write with .NET because the managed runtime environment enforces both a common type system and a range of rules that promote thread safety. Keep in mind, though, that calling COM+ components from .NET code is potentially unstable because COM+ components execute outside of the managed execution environment.
The most common way to quantify availability is by uptime, which is the percentage of time that an application is responsive and functional. There is no typical acceptable uptime percentage for an application. The acceptable number is the one that all parties agree is reasonable and can be committed to in a legal contract. Companies typically like to see more than 99-percent uptime, excluding scheduled downtime.

On an annual basis, this number is not as unreasonable as it might appear to be. For example, 99-percent uptime translates to a whopping 88 hours, or roughly two standard work weeks per year of downtime. In economic terms, a high-traffic e-commerce Web application can lose a lot of revenue in this time period, particularly if it falls around a heavy shopping period such as Christmas. Figure 1 illustrates a sampling of downtime in hours, based on percentage uptimes.
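The arithmetic behind the 99-percent figure is straightforward; a quick sketch:

```csharp
// Converting an uptime percentage into annual downtime hours,
// as in the 99-percent example above.
double uptimePercent = 99.0;
double hoursPerYear = 365 * 24;  // 8760 hours in a (non-leap) year
double downtimeHours = hoursPerYear * (100.0 - uptimePercent) / 100.0;

// downtimeHours is 87.6, i.e. roughly 88 hours of downtime per year.
```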

Figure 1: Application availability

Assessing Scalability

The third of the big three metrics, after throughput and availability, is scalability. Many people confuse this factor with performance. The two are related, but they are not the same. Scalability is the ability of the application to perform under ever-increasing loads. A Web application is considered non-scalable if its performance falls below established standards once the load starts to increase.

Scalability also has a lesser-known definition, which is of particular importance to developers. Namely, scalability is the ability of an application to fully utilize the full processing power of a machine as it increases the number of processors. By this definition, an application that performs well on a two-processor machine is considered non-scalable if a four-processor machine fails to improve application performance by any significant amount.

This definition speaks to a number of low-level issues, such as the ability of both the application and the machine to work effectively using multiple threads. The .NET Framework provides sophisticated thread management capabilities and makes it easier than before to write thread-safe code. However, you may not achieve scalability simply by using the Framework's out-of-the-box thread management abilities. The schematic chart shown in Figure 2 illustrates two applications deployed on machines with increasing numbers of processors (X-axis).

The Y-axis indicates the requests per second that the applications are able to process. In this example, the number of users, or the load, is assumed to remain the same. The chart illustrates that Application #1 experiences much smaller performance gains than Application #2 as the number of processors increases. This implies that Application #2 is more scalable than Application #1.

Even so, neither application experiences any performance improvements in moving from a four-processor to an eight-processor machine. Scalability is clearly a parameter that is relative rather than absolute. Application #2 is more scalable than Application #1 as long as the number of processors remains fewer than eight. Application #2 may perform better than Application #1 on an eight-processor machine; however, it is no more scalable than Application #1 at this number of processors.


Figure 2: Application scalability

The .NET Framework provides other out-of-the-box features that may enhance scalability as an application experiences increasing loads. For example, the ADO.NET managed data providers implement connection pooling for database connections, which at face value would appear to always be a good thing. This is not so if your application uses dynamic connection strings, where pooling may actually hinder performance for a specific connection even while helping performance on others.

So, although the .NET Framework provides the tools for enhancing scalability, you need the smarts of a developer to take full advantage of them. Keep in mind that scalability works in two dimensions: up and out. Traditional scalability actually refers to scaling up, meaning that your application must accommodate increasing load on a fixed set of servers.

In this model, the processing is distributed across the same set of servers regardless of whether the load is high or low. Scaling out refers to applications designed to run across a server farm, where multiple servers collaborate to share the burden of increasing load. In this model, when the application load is low, just a few of the available servers handle the processing.

As the load increases, additional servers take on the burden, which effectively increases the available capacity of the system. Thus, architects and developers have a two-fold challenge in designing for scalability: designing their applications to both scale up and scale out. ASP.NET facilitates this effort by making certain management features less dependent on specific servers.

For example, you can easily configure state management to run across several servers in a farm. In addition, XML-based configuration files make it easier to point applications to distributed resources. One final note on the topic of scalability: Throwing hardware at an application usually buys you performance gains, but this approach should complement other measures, not replace them. For example, doubling the memory on a Web server will certainly result in immediate performance gains for many kinds of Web applications, but this will do nothing to address bottlenecks that may exist at the processor level or at the database level.
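For instance, session state can be pointed at an out-of-process state server through web.config, so any server in the farm can service a request. A sketch (the connection string and timeout are illustrative values):

```
<configuration>
  <system.web>
    <!-- Store session state in an out-of-process state server
         shared by all servers in the farm. -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=127.0.0.1:42424"
                  timeout="20" />
  </system.web>
</configuration>
```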

The database server is, after all, an equally important partner to the Web server in terms of its influence on scalability. Similarly, scaling out with additional Web servers will buy you perceived performance gains because more servers are now available to share the processing load. But, again, this approach will not address processor-level bottlenecks. Worse yet, if your application is experiencing memory leaks, then by scaling out additional servers you have essentially increased your problem by transferring an existing issue from a small number of servers to a larger number of servers.

Hardware considerations are an important aspect of designing a high-performance Web application. The hardware configuration is critical for maintaining high reliability and availability for an application. Basically, do not focus on hardware considerations at the expense of application-level issues because in the long-term, scalability will not benefit, even if short-term scalability does.

Quoted.

Thursday, January 15, 2009

Profiling ASP.NET Application Performance

Performance profiling is a complicated process that requires developers to run through repeated iterations of profiling and optimization. There are a number of factors affecting performance that operate independently and so must be tackled independently, often by different groups of people.

Developers must tackle application coding issues by identifying the offending code blocks and then rewriting or optimizing them. System administrators must tackle server resource problems by examining the full set of tasks that the server is handling. Performance profiling has to be a cooperative task between different groups of people because at the end of the day, a user who experiences slow performance will be unhappy with the experience, regardless of whose piece of the puzzle is causing the issue. Performance profiling affects everyone on the technical team, and so it must engage everyone as well.

Performance profiling is a time-dependent activity because application performance changes over time. Performance may change periodically throughout the day or as the load on the application fluctuates. Alternatively, application performance may experience degradation over a long period of time, particularly as the database grows. Performance profiling must begin with a baseline, which is a set of metrics that define the performance at a specific time, under a specific set of conditions.

The baseline is the measure against which future performance will be compared. A baseline may also be referred to as a benchmark. A baseline includes a description of both hardware and software, and it typically includes a range of performance numbers that were derived without changing the hardware configuration.

This is an example of a baseline description:
  • The Web application runs on a single, dual-processor 400MHz server with 512MB of RAM. The application was subjected to an increasing load of 10 to 100 concurrent users with five threads. The application serves between 18 and 45 requests per second, diminishing with an increasing number of concurrent users. Response times for 10 concurrent users varied between 0.8 and 3.5 seconds, depending on the page. The Home page, which is cached, responds in 0.8 seconds, and more intensive screens take longer to deliver to the client.
Figure 1 shows the baseline application performance's throughput, and Figure 2 shows the baseline application performance's response time.

Figure 1: Baseline application performance—throughput

Figure 2: Baseline application performance—response time

These graphs focus on just two metrics—namely, throughput and response time. A complete baseline should contain data for several metrics but should always include basic metrics such as throughput and response time because these most directly affect the user's experience with the application.

Quoted.

Tuesday, January 13, 2009

Performance Tuning and Optimization for ASP.NET Applications Series

I am going to post a series of articles that explain how you can tune and optimize your web application.
The purpose of this article series is to provide you with a primer on the concepts and terminology of application performance and optimization.

For many of you, these are concepts that are familiar, but with which you are not necessarily comfortable. Application performance issues are often paid little attention in smaller Web applications with low hit counts. However, these issues start to feel more important as your application starts experiencing higher hit counts and heavier loads. It is an unfortunate reality that application performance behaves in a nonlinear way.

To borrow a phrase from the stock market, this means that current (application) performance is not an indicator of future results. Loosely speaking, performance refers to an application's ability to service user requests under varying load conditions. Performance is measured by multiple kinds of metrics, including throughput and response time, to name a few. Performance benchmarks describe the goal you are trying to achieve. Important benchmark indicators include scalability and availability.

Optimization refers to fine-tuning an application for performance, based on your goals and the expected load on the application. Optimization is generally an iterative process where you apply your knowledge of the technology to address bottlenecks in performance.

The iterative aspect of the process comes about through a cycle of testing and tweaking until you have achieved your performance goals for the application. We cover all of these topics in detail, both later in this chapter as well as throughout this book.

Application performance issues are often ignored until the last minute, precisely when the application may already be experiencing heavy loads. Applications should be designed for optimum performance as a forethought, not as an afterthought. Optimization, by its nature, is a process that applies to a completed application. It is also an ongoing process that does not stop once the application is in production. So, there are different aspects of designing for performance.

Quoted.

Sunday, January 11, 2009

Understanding the ASP.NET Architecture Part - 4

View State


View state is an ASP.NET feature that allows a Web page to retain all of its state values between requests to the Web server. In classic ASP, developers are forced to handle this task manually, which is a tedious and time-consuming process.

Consider one example, where the user fills out a form with several HTML input text controls.
Once the form is submitted to the server, the ASP engine retrieves the control contents from the HTTP request and processes the business logic.

When the page returns to the client, the control values (including user input) are not retained, unless the developer has manually rehydrated the control values using embedded server-side code within the HTML form.

This issue is a regular headache for form-based applications because the server commonly needs to return control values to the client for a second look—for example, if the user input fails server-side validation.
Input controls represent one of the simpler problems.

Drop-down list controls pose a more complicated problem because they contain multiple values and include a user-selected value. Typically, drop-down lists need to be hydrated just once, and they must then retain the same values and user selections.

Classic ASP requires you to manually repopulate the select box contents between postings to the server.
This requirement is not just tedious for the developer, but it can also be expensive for the server, especially if additional database lookups are required.

View state enables all controls on a Web page to retain their state values between successive postbacks to the server. Furthermore, view state preserves a control's properties between postbacks. For example, if a control's Visible property is set to "False," then the control will remain invisible between successive postbacks.

In short, view state saves developers time by reducing the amount of coding they have to do. In addition, view state improves application performance by eliminating the database calls that would otherwise be needed to rehydrate controls between postbacks.

How View State Works


View state is stored using encoded key-value pairs within a hidden form field. Every control on the page is represented using one or more pairs. For view state to work, a Web page must contain a server-side form (with the runat="server" attribute) so that a hidden __VIEWSTATE field may be added directly below the opening <form> tag.

(A server-side form is provided by default when you create a new Web page.) View state only works for server-side controls contained within the server-side form. In fact, server-side controls will generate compilation errors if they are added to a page outside of a server-side form.

For a Web page with view state enabled, the view state process works as follows:
  1. The client requests a Web page.
  2. The server renders the page, including a __VIEWSTATE hidden field that contains encoded key-value pairs for the properties and values of every control on the page.
  3. The client enters information into the rendered HTML controls and then posts the page back to the server.
  4. The server initializes the page and then tracks the client's changes to control state. The server executes the business logic on the page and then renders the page. The server controls get rehydrated using the latest values stored in the __VIEWSTATE hidden field, which is updated to include the new, client-specified values and selections.
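To make step 2 concrete, the rendered HTML might contain something like the following (the page name and the encoded value are purely illustrative, and the value is truncated):

```html
<form method="post" action="Default.aspx" id="form1">
  <!-- The hidden field appears directly below the opening form tag -->
  <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE"
         value="/wEPDwUKLTEwNDY1NjIw..." />
  <!-- server controls render here -->
</form>
```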
You manage view state using the StateBag class, which is a member of the System.Web.UI namespace.
The Page object provides access to a StateBag object through its ViewState property.

The StateBag class implements a number of interfaces, including the IDictionary and IEnumerable interfaces, which allow you to enumerate and modify the view state contents. Table 3 describes important members of the StateBag class.

Table 3: The StateBag Class Members
CLASS MEMBER
DESCRIPTION
Add
This method adds a new StateItem object to the StateBag object or updates an existing StateItem object value if it is already included in the StateBag class. The StateItem object represents a view state name-value pair.
Remove
This method removes an object from the StateBag object.
Keys
This property gets a collection of (enumerable) keys that represent the items in the StateBag object.
Item
This property gets or sets the value of an object in the StateBag object.
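As a short sketch of these members in use (assuming the code runs inside a Page's code-behind, where the ViewState property is available; the key name is arbitrary):

```csharp
// Add a new entry, or update it if the key already exists
ViewState.Add("LastSort", "Price");

// Read it back through the indexer (the Item property in C#)
string sort = (string)ViewState["LastSort"];

// Enumerate all keys currently stored in view state
foreach (string key in ViewState.Keys)
{
    Response.Write(key + "<br/>");
}

// Remove an entry when it is no longer needed
ViewState.Remove("LastSort");
```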

You can add or modify view state items from the Web form code-behind file up until the Pre-Render stage of the page's lifecycle, which is just before the page starts rendering HTML. For example, you can append custom view state values that are distinct from any of the page controls:


protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // Store custom ViewState values (C# uses indexer syntax)
    ViewState["Key1"] = "MyValue1";
    ViewState["Key2"] = "MyValue2";
}

In fact, you can add any object to view state, as long as it supports binary serialization. Serialization is the process by which an object's state is converted into a stream of bytes in order to be saved to a storage medium. Serializable objects, such as the DataView object, implement the ISerializable interface.

Non-serializable objects, such as the ListItem object, do not implement this interface. As you can see, you can add serializable objects to view state easily:

// ViewState will store any serializable object
DataView dv = new DataView();
ViewState.Add("Mydv", dv);

Retrieving objects from view state is just as simple:

if (Page.IsPostBack)
{
    // Retrieve the DataView object from ViewState
    DataView dv = (DataView)ViewState["Mydv"];
}

Persisting View State Across Multiple Pages


In special cases, view state may persist values across multiple pages, not just across postbacks for the same page. Consider an application with two Web pages, where each page contains a TextBox server control named TextBox1.

If Page 1 submits to Page 2, then the TextBox control on Page 2 will pick up the view state for the Page 1 TextBox control. You can use this behavior to your advantage if you want to persist common information across multiple pages, as long as the Web application uses posting between pages, rather than hyperlinks.

For example, consider an application where every page contains a Label server control for persisting the client's login name. This is the design view for Page 1:

In the code-behind file, you would assign the login name to the Label control:

public class Page1 : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.Label lblUserName;

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
            // Assign Username to label (Username is hardcoded for demo purposes)
            lblUserName.Text = "DemoUser";
        }
    }
}

Then on subsequent Pages 2, 3, and so forth, you can automatically pick up the Label value from view state by doing just two things:
  1. Add a Label server control named "lblUserName".
  2. Add a variable declaration for the label in the code-behind file for the server control.
The only "coding" you need to do is to ensure that every page declares the server control variable:

protected System.Web.UI.WebControls.Label lblUserName;

The inverse behavior also applies: The view state will clear for all server controls that are not repeated between pages. Custom view state values will only persist for as long as the client continues to post back the same page.

Disabling View State


ASP.NET server controls have their view state enabled by default, but you can disable it for those controls that do not need to retain their values between postbacks. For example, DataGrid controls that are rehydrated on every postback do not need to retain their view state.

You can disable view state for a server control in two ways:
  • At design-time, by setting its EnableViewState attribute equal to "False" in the control's markup.
  • At runtime, although you should make sure to do so the first time the page loads:

lblUserName.EnableViewState = false;
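For the design-time option, the attribute goes directly on the control's markup; for example, using the Label control from the earlier sample:

```html
<asp:Label ID="lblUserName" runat="server" EnableViewState="false" />
```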

You can also disable view state for an entire page using the @ Page directive:

<%@ Page Language="C#" EnableViewState="false" CodeFile="Default.aspx.cs" Inherits="_Default" %>

Finally, you can disable view state for an entire application using the Web.config configuration file. To do so, set the enableViewState attribute on the <pages> configuration element:
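A minimal sketch of the relevant Web.config section (only the elements involved are shown):

```xml
<configuration>
  <system.web>
    <pages enableViewState="false" />
  </system.web>
</configuration>
```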

Keep in mind that when view state is completely disabled, you will have to do a lot of work to manage state manually.

Quoted.

Friday, January 9, 2009

Understanding the ASP.NET Architecture Part - 3

The Page Class


A Page object is instantiated every time an *.aspx page is requested. The Page object is responsible for processing a client request and rendering HTML in response. The Page object provides programmatic access to a Web form, plus access to the HTTP intrinsic objects such as HttpRequest and HttpResponse.
Every *.aspx page is associated with an instance of the Page class.

Specifically, the Web page inherits from a code-behind class, which in turn inherits from the Page class.
For example, the login.aspx page contains the following directive at the top of the file:

<%@ Page Language="C#" Codebehind="login.aspx.cs" Inherits="Login" %>

Then, when you switch to the code-behind file, you see the following:

public class Login : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {

    }
}

Note that the Codebehind attribute simply indicates the location of the code-behind file. The Inherits attribute is what actually binds the Web form to a specific class. The @ Page directive supports a long list of attributes that control the Web form's behavior at runtime.

Several of these attributes override application-level settings in the Web.config file. For example, if the Web application has session state enabled, you can prevent an individual page from participating by setting its EnableSessionState directive to "False."

The @ Page Directive


The @ Page directive provides the three required attributes shown in the previous listing: Language, Codebehind, and Inherits. In addition, many others are set to "True" by default, which means they apply to the page even if they are missing from the @ Page directive. So, the @ Page directive is as much about disabling what you do not want as it is about enabling what you need.

You should always explicitly set three attributes:

  • AutoEventWireup: This is a Boolean attribute that indicates whether Page events are wired up to specifically named handler functions ("True") or whether they must be wired up explicitly to user-defined functions ("False"). If the attribute value is "True" (or, by default, if the attribute is missing), then the Page_Init() and Page_Load() event handlers will always be called, and they must be declared in a standard way. If the attribute value is "False," then the events are handled only if the user chooses, and they can be delegated to any function that supports the right signature for the event. Remember, Page events will always be raised, but this does not mean you have to devote code and processing time to responding to them. We always set this attribute value to "False."
  • EnableViewState: View state allows server controls to persist their contents and selected values between postings. View state is enabled by default for an entire page, but because it can have performance implications, you should set the Boolean attribute explicitly, even if you plan to keep view state enabled ("True"). View state is convenient, but it is not always needed. Typically, it is most convenient to keep view state intact for the overall page and disable it for individual controls.
  • EnableViewStateMac: This Boolean attribute indicates whether the view state contents should be encrypted and whether the server should inspect them for evidence of tampering on the client. The Mac portion of the attribute name stands for Machine Authentication Check. Many users have run into page loading problems when this attribute is omitted (which sets the attribute to "True," by default). Many users set this attribute value to "False." Microsoft hastens to point out that MAC encryption is not a replacement for a certificate-based encryption system such as Secure Socket Layer (SSL). If tampering is an issue for you, consider implementing a certificate-based encryption system rather than relying on MAC encoding.
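Putting the three recommendations together, an @ Page directive might look like this (the file and class names are placeholders; whether to disable MAC checking depends on your own security requirements):

```html
<%@ Page Language="C#" AutoEventWireup="false" EnableViewState="true"
    EnableViewStateMac="false" CodeFile="Default.aspx.cs" Inherits="_Default" %>
```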

Page Class Members


The Page class members roughly fall into three groups. The first group includes the properties and methods that manipulate the Web form controls. The second group includes properties that access the ASP intrinsic objects. The third group includes the Page lifecycle events. Table 1 describes important members of the Page class that fall into the first two groups.

Table 1: The Page Class Members

CLASS MEMBER
DESCRIPTION
Controls
[Property] A collection of Control objects hosted on the page. You can iterate through the collection using foreach syntax. For example, to print out control type details, use this code:

foreach (Control objItem in Page.Controls)
{
    Console.WriteLine(objItem.GetType().ToString());
}

FindControl
[Method] Retrieves an object reference for a specific control on the page, using its ID. For example:

TextBox MyCtl = (TextBox)Page.FindControl("txt1");

HasControls
[Method] Determines if a server control contains child controls. For example, a GridView control may contain embedded (child) controls, such as textboxes and buttons.
IsPostBack
[Property] A Boolean value that indicates whether the current GET or POST request results from the current page posting back to itself. This property is typically checked in the Page_Load() event. If view state is enabled, this property often indicates that the page should be processed but not re-rendered. Because view state preserves the original contents of the Page controls, you will get duplicate items in the controls if they are re-rendered without first clearing the existing items. (This generalization may not apply to your page.)
Request
[Property] Gets a reference to the current HttpRequest instance, which encapsulates the client's request details.
Response
[Property] Gets a reference to the current HttpResponse instance, which encapsulates the server response details.
Application
[Property] Gets a reference to the Application object for the current request (which wraps the HttpApplicationState class). This class enables global application information to be shared across multiple requests and sessions.
Session
[Property] Gets a reference to the Session object for the current request (which wraps the HttpSessionState class). This class provides access to session-specific settings and values. ASP.NET provides several modes for storing session state.
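A quick sketch of the second group in action, inside a Page's code-behind (the key names and values are arbitrary):

```csharp
// Read a query-string value from the current request
string id = Request.QueryString["id"];

// Store a per-client value in session state
Session["UserName"] = "DemoUser";

// Store a value shared by all clients in application state
Application["VisitCount"] = 0;

// Write directly to the response stream
Response.Write("Hello from the Page class");
```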

Page Lifecycle Events


Recall that the Page object is responsible for processing a client request and rendering HTML in response. The Page object runs through a specific set of lifecycle stages as it fulfills its responsibilities. Several of these stages are associated with events that you can capture and code behind.

The Page class itself inherits from the Control and TemplateControl classes, both of which are members of the System.Web.UI namespace. The Control class provides a common set of properties, methods, and events that are shared by all server controls.

The TemplateControl class provides additional base functionality for the Page class. Together, these two classes provide the base events that the Page class raises, as well as base methods that the Page class can override.

Table 2 summarizes the more important lifecycle stages and their associated events, or the methods that can be overridden, for adding code to specific stages.

Table 2: The Page Lifecycle Stages
STAGE
DESCRIPTION
EVENT OR METHOD?
Initialize
Initializes settings.
The Page_Init event
Load
Performs actions for all requests, including initializing server controls and restoring their state. The Page object provides an .IsPostBack property that is "False" for the first request and "True" for reposts. The .IsPostBack property allows you to set up conditional logic in the Load() event handler for handling the first post vs. subsequent reposts. You can check for postback data and view state information for the Page's child controls.
The Page_Load() event
Pre-Render
The stage where remaining updates are performed prior to saving view state and rendering the form. You can add custom values to view state at this stage.
The Page_PreRender() event
Save View State
The stage where view state information is persisted to a hidden field on the form.
The SaveStateComplete() method
Dispose
Releases resources and performs final cleanup, prior to unloading the Page object.
The Page_Disposed() event
Unload
This stage is where the Page object is unloaded from server memory. You can perform cleanup and release resources in this stage, but developers generally perform these tasks in the Page_Disposed() event.
The Page_Unload() event

The descriptions in Table 2 are specific to the Page object, but many of the events and methods apply equally to any server control. This should be of no surprise, given that the Page class inherits from the Control class, which is common to all server controls.

In addition, the Page object acts as a container for a collection of server controls, all of which run through their own processing stages. There is a complicated interplay between the controls' execution orders and the Page execution order.

This sequencing is of particular concern when you are developing a custom server control that handles postbacks and participates in view state.

Quoted.

Wednesday, January 7, 2009

Understanding the ASP.NET Architecture - Part 2

HTTP Handlers


HTTP handlers implement a common interface called IHttpHandler (a member of the System.Web namespace).

There are two kinds of handler classes:
  • Handler processing classes: These are classes that implement the interface (IHttpHandler) that allows them to process HTTP requests. For example, the Page class is a handler that represents an *.aspx Web page.
  • Handler factory classes: These are classes that dynamically manufacture new handler classes. These classes implement the IHttpHandlerFactory interface. For example, the PageHandlerFactory class generates a Page handler class for every HTTP request that calls an *.aspx Web page.
The IHttpHandler interface defines a method called ProcessRequest(), which accepts an HttpContext instance as its argument. The interface also defines a Boolean property called IsReusable, which indicates whether a single handler instance can be reused to service multiple requests:


void ProcessRequest(HttpContext context);
bool IsReusable { get; }

The HttpContext object encapsulates all of the HTTP-specific details for an individual HTTP request. Clearly, the ProcessRequest() method provides a clean way of passing a client's request details between different objects on the Web server.

HTTP handlers must be registered in the ASP.NET configuration files (either Machine.config or Web.config). This is an excerpt from the Machine.config file:

<httpHandlers>
  <add verb="*" path="trace.axd" type="System.Web.Handlers.TraceHandler" />
  <add verb="*" path="*.aspx" type="System.Web.UI.PageHandlerFactory" />
</httpHandlers>

The <httpHandlers> section registers HTTP handlers on the server using child <add> elements. A handler class can process an individual Uniform Resource Identifier (URI) or a group of URIs that share a common file extension. The Machine.config file excerpt shown previously illustrates both options. The attributes for the <add> element are as follows:
  • Verb: This is a comma-separated list of HTTP verbs, including GET, POST, PUT, or the wildcard asterisk (*).
  • Path: This is a single URI path or a wildcard path, for example, *.aspx.
  • Type: This is a class/assembly combination that contains the handler class. The excerpt only shows the class name, but you can append the assembly name as well, separated by a comma.
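For example, a handler class compiled into a hypothetical assembly named MyHandlers.dll would be registered with the assembly name appended after a comma:

```xml
<httpHandlers>
  <add verb="GET,POST" path="*.xyz" type="MyNamespace.MyHandler, MyHandlers" />
</httpHandlers>
```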
By now you can begin to appreciate how extensible the ASP.NET runtime engine is. You can route HTTP requests to any handler you set up. Few readers will ever need to create a custom handler because you can almost always tackle a specific HTTP request through the Page object (discussed next). But in other cases, HTTP handlers are the most efficient way to handle an HTTP request because they can service a request without going to the expense of loading up a Page object.

Let's look at an interesting example. Consider a Web application that logs the Internet Protocol (IP) address and a timestamp of all clients when they first access the application. Let's say that the application provides an entry page, called gateway.aspx, that provides no user interface but that records the client's IP address with a timestamp and then redirects the client on to a formal login page.

You could create a standard *.aspx Web page that performs this function, but this approach unnecessarily creates an instance of the Page object (assuming you have not altered the standard HTTP handler for .aspx pages). A better approach would be to create a custom HTTP handler that processes the gateway.aspx page directly. If you want even more distinction, you could create a custom file extension, such as .xyz, for the gateway page.

Listing 1: A Custom HTTP Handler:

public class oHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Instance an EventLog object (requires using System.Diagnostics)
        EventLog objEvt = new EventLog();
        try
        {
            //Write the client's IP address to the event log,
            //with a timestamp
            string strClientIP = context.Request.UserHostAddress;
            objEvt.Source = "ASP.NET";

            // Event source is ASP.NET
            objEvt.WriteEntry("Client IP: " + strClientIP + " logged at: " + DateTime.Now);

            //Redirect the client to the login page
            context.Response.Redirect("login.aspx");
        }
        catch (Exception)
        {
            //No action taken. Prevents unhandled errors if
            //.WriteEntry fails.
        }
    }

    public bool IsReusable
    {
        get
        {
            return true;
        }
    }
}

Listing 1 is very simple. The client's IP address is extracted using the Request object's UserHostAddress property. Next, the handler writes a record into the system event log. Finally, the handler redirects the user to the formal login page, called login.aspx. Clearly, the HttpContext object is critical for an HTTP handler class to work!

Notice that Listing 1 includes exception handling around the WriteEntry() method in case this method fails. The exception handler does not do anything in that it does not take a specific action when an exception occurs. However, it does prevent unhandled exceptions, which could potentially bring down the Web site.

This may happen if the site administrator fails to reconfigure the event logs from the default overwrite after 7 days to overwrite as needed. If the event logs fill up completely, then a write failure will cause an exception in the code, with potentially adverse effects. This is of particular concern in high-volume sites that write a lot of information to the event logs.

Next, you must register the HTTP handler class in the Web.config (or Machine.config) file:

<httpHandlers>
  <add verb="*" path="gateway.aspx" type="oHandler" />
</httpHandlers>

The syntax for the class name and the assembly is important to get right; otherwise the Web application will generate runtime errors when you attempt to load it in the browser.

Finally, you can test the HTTP handler by opening a new browser and typing the path to the gateway page—in this case, http://localhost/WebSite1/gateway.aspx. The page will load, then after a couple of seconds you should be redirected to the login.aspx page. Open the application log in the Event Viewer, and you will see a recent information record with something like the following contents:
Client IP: 127.0.0.1 logged at: 1/1/2009 7:19:45 PM.


The most interesting aspect of this example is that the page gateway.aspx does not exist. We never added one to the ASP.NET project, nor do we need to add one. The HTTP handler recognizes the path name and takes that as an indication to start working. The actual page does not need to exist, and this is what makes HTTP handler classes so efficient under certain circumstances.

Quoted.
