Sunday, November 29, 2009

XML Data and Document Storage in SQL Server 2005

XML is a platform-independent data representation format originally based on SGML. Since its popularization, it has increasingly been used as a data storage format. It has its own type system, based on the XML Schema Definition language (XSD). Both XML and XSD are W3C standards at the Recommendation level. An XML schema defines the format of an XML document much as a SQL Server schema defines the layout of a SQL Server database.

The XML type system is quite rigorous, enabling definition in the XML Schema Definition language of almost all of the constructs available in a relational database. Because it was originally designed as a system that could represent documents with markup as well as what is traditionally thought of as data, the XML type system is somewhat bifurcated into attributes and elements. Attributes are represented in the XML serialization format as HTML attributes are, using the name='value' syntax.

Attributes can hold only simple data types, like traditional relational attributes. Elements can represent simple or complex types. An element can have multiple levels of nested subelements. This means that an element can be used to represent a table in a relational database. Each row would be represented as a child element, with relational attributes (columns) represented as either attributes or subelements.

The two ways of representing relational column data in XML are known as element-centric mapping (where each column is a nested subelement) and attribute-centric mapping (where each column is an attribute on an element representing the row). Since subelements can be nested in XML documents, a document corresponds more closely to a hierarchical form of data than to a relational form.
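
For illustration, here is a single hypothetical Customers row rendered both ways (the table and column names are invented for the example):

<!-- Element-centric mapping: each column is a nested subelement -->
<Customer>
  <CustomerID>1</CustomerID>
  <Name>Jane Doe</Name>
</Customer>

<!-- Attribute-centric mapping: each column is an attribute on the row element -->
<Customer CustomerID="1" Name="Jane Doe" />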

This is reinforced by the fact that, by definition, an XML document must have a single root element. Sets of elements that do not have a single root element are called document fragments. Although document fragments are not well-formed XML documents, multiple fragments can be composed together and wrapped with a root element, producing a well-formed document. In addition to representing relational and hierarchical data, the XML Schema Definition language can represent complex type relationships.

XSD supports the notion of type derivation, including derivation by both restriction and extension.
This means that XML can directly represent types in an object hierarchy. A single XML schema document (which is itself defined in an XML form specified by the XML Schema Definition language) represents data types that scope a single XML namespace, although you can use XML namespaces in documents without having the corresponding XML schema. An XML namespace is a convenient grouping of types, similar to a user schema in SQL Server.

An XML schema defines the namespace that its types belong to by specifying the targetNamespace attribute on the schema element. An XML document that uses types from a namespace can indicate this by using a default namespace or by explicitly using a namespace prefix on each element or attribute of a particular type. Namespace prefixes are arbitrary; the xmlns attribute establishes the correspondence between a namespace prefix and a namespace.
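
For example, a document can bind an arbitrary prefix to a namespace and then qualify element names with it (the namespace URI below is invented for illustration):

<inv:Invoice xmlns:inv="urn:example-com:invoices">
  <inv:Amount>100</inv:Amount>
</inv:Invoice>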

This is analogous to using SQL Server two-part or three-part names in SQL statements. Only when an XML document contains types defined by XML schemas is it possible to determine the exact data types of elements or attributes; without a schema, XML elements and attributes are of the string data type by definition. A predecessor to XML schemas, known as Document Type Definition (DTD), was primarily concerned with defining document structure and allowed only limited information about data types.

XSD is a superset of the aforementioned type systems, including all the DTD structure types. Using an XSD schema or schemas to determine whether a document is correct is known as schema validation. Schema validation can be thought of as applying type constraints and declarative integrity constraints to ensure that an XML document is correct.

A nonvalidated XML document still must conform to XML well-formedness rules, and a single XML document adheres to a set of rules known as the XML Information Set (Infoset), consisting of structure and some content information. Validating an XML document against schemas produces what is called a Post-Schema-Validation InfoSet (PSVI). The PSVI information makes it possible to determine a strong, well-defined type for each XML element and attribute.

SQL Server 2005 introduces an XML data type. This data type can be used in table definitions to type a column, as a variable type in Transact-SQL procedural code, and as procedure parameters. In addition, columns, variables, and parameters of the XML data type can be constrained by an XML schema.
XML schemas are defined in the SQL Server catalog. XML, like relational databases, has its own query language optimized for the data format.
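
Before turning to query languages, here is a minimal Transact-SQL sketch of the new data type (the schema collection, table, and document shape are invented for the example):

-- Catalog an XML schema collection, then use it to type a column
CREATE XML SCHEMA COLLECTION InvoiceSchema AS
N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="Invoice">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="Amount" type="xs:decimal" />
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:schema>'
GO
CREATE TABLE Invoices (
    InvoiceID  int PRIMARY KEY,
    InvoiceDoc xml(InvoiceSchema)  -- typed XML; omit the collection for untyped XML
)
GO
-- Variables (and procedure parameters) use the same type
DECLARE @doc xml
SET @doc = '<Invoice><Amount>100</Amount></Invoice>'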

Since XML data is hierarchical, it's reminiscent of a hierarchical file system. The archetypal query language for XML documents is known as XPath. Queries in XPath reflect the hierarchical nature of XML, since nodesets are selected by using syntax similar to that used to specify files in the UNIX file system. As an example, when a typical XML document is queried using a hierarchical XPath query, the result is a nodeset containing all the nodes at that level of the hierarchy.
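
For example, against a hypothetical document rooted at an Invoices element, path expressions select nodesets much as directory paths select files:

/Invoices/Invoice/LineItem
/Invoices/Invoice[@ID="1"]/LineItem

The first expression returns every LineItem at that level of the hierarchy; the second uses a predicate on the ID attribute to narrow the result to one invoice's line items.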

Like a SQL query, an XPath query simply produces a resultset consisting of possibly multiple instances of items. Unlike in SQL, these results are not always rectangular in shape. XPath results can consist of nodesets of any shape or even scalar values. In SQL, database vendors can implement a variation of SQL-PSM (persistent stored modules) that composes possibly multiple SQL queries and some procedural code to produce a more complex result.

SQL Server's variation of SQL-PSM is known as Transact-SQL. XML processing libraries implement an analogous concept by using an XML-based nonprocedural language called XSLT. Originally meant to produce nice-looking HTML pages from XML input, XSLT has evolved into an almost full-fledged programming language. Vendors have even added proprietary extensions to XSLT to allow it to execute code routines in procedural programming languages like Visual Basic or C#.

Since XPath and XSLT were not originally developed to process large amounts of data or data from multiple sources, a new programming language for XML, known as XQuery, has been developed. XQuery implements many of the best features of XPath and XSLT, and was designed from the ground up to allow queries that include multiple documents. It is also designed specifically to be optimizable. In addition, it adds some of the syntax features of SQL.

XQuery's data can be strongly typed; this also assists in query optimization. XQuery includes a query language, the equivalent of SQL Server's SELECT, but does not define a standard implementation of DML, the equivalent of SQL Server's INSERT, UPDATE, and DELETE statements. SQL Server 2000 allowed users to define mapping schemas (normal XML schemas with extra annotations that mapped XML items and concepts to SQL items and concepts) that represented all or a portion of the database as a virtual XML document, and to issue XPath queries against the resulting data structure.

In addition, SQL Server 2000 extended Transact-SQL to enable relational resultsets to be returned as XML. This consists of support for a FOR XML clause. Three different subcategories of FOR XML are supported. The SQL Server 2000 support allowed XML document composition from relational data and XML document decomposition into multiple relational tables.
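
The three subcategories are RAW, AUTO, and EXPLICIT. A minimal sketch of the first two (the Customers table is assumed for the example):

-- RAW: one attribute-centric <row> element per row
SELECT CustomerID, Name FROM Customers FOR XML RAW

-- AUTO: element names are derived from the table, nesting follows joins
SELECT CustomerID, Name FROM Customers FOR XML AUTO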

SQL Server 2005 extends this support by adding direct support for XQuery. The XQuery engine runs directly inside SQL Server, unlike XPath support in SQL Server 2000, which was implemented by a portion of the SQL Server OLE DB provider (SQLOLEDB) that took a mapping schema and an XPath query, produced a SELECT... FOR XML query, and sent that to SQL Server.

Native support for XQuery, combined with XQuery's design for optimization and its support for multiple documents (a series of XML columns), should improve on the already good support for querying XML data.
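
In SQL Server 2005, an XQuery expression is embedded in Transact-SQL through methods on the xml data type. A minimal sketch, reusing the hypothetical Invoices table from above:

SELECT InvoiceDoc.query(
    'for $i in /Invoice
     where $i/Amount > 50
     return $i')
FROM Invoices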

Source: https://www.nilebits.com/blog/2009/12/xml-data-and-document-storage-in-sql-server/

Monday, August 17, 2009

What does the term Web 2.0 mean?

The term “Web 2.0” defines a set of principles and practices for web applications, which, when followed, entitle a web application to wear the Web 2.0 crown.
A web site can claim to be a Web 2.0 site if it:
  • Allows users to control data presented on the web site
  • Presents a platform that enables the mixing (or mash-up) of technologies and data
  • Enables services to be consumed that are beyond the boundary of the application
  • Harnesses collective intelligence by enabling the following:
1 - Aggregation of relevant content from heterogeneous sources.
2 - User contributed content.
3 - User moderation of content via tagging and rating.
  • Uses state-of-the-art technologies that take interactivity on the Web to the next level by use of popular technologies like Ajax, Flash, and Silverlight.
Dropthings, being a web portal, allows users to control what they want to put on the page. The widget architecture allows mash-ups of technologies in the form of widgets. It exposes web services that external entities can consume.

The portal aggregates content from many different sources, such as photos from Flickr, news from CNN, weather reports from Weather.com, and many more. It supports user submitted content aggregation in the form of RSS feeds.

Finally, it pushes interactivity on the Web to the next level by using Ajax technologies.

Source: https://www.nilebits.com/blog/2009/11/what-is-the-web-2-0-term-means/

Thursday, August 13, 2009

What is a Web Portal?

A web portal is a page that allows a user to customize his homepage by dragging and dropping widgets onto it. This approach gives the user complete control over what content he sees on his home page, where on the page he wants to see it, and how he wants to interact with it.

A widget is a discrete piece on a web page that performs a particular function and comes with its own UI and set of features. Examples of widgets include to-do lists, address books, contact lists, RSS feeds, clocks, calendars, playlists, stock tickers, weather reports, traffic reports, dictionaries, games, or almost anything you can imagine that can be packaged up and dropped onto a web page.

In a corporate environment, widgets can connect to internal systems; for example, an expense tracker widget can interact directly with the internal accounting system. If you are familiar with the SharePoint Portal, then you already know about widgets, which are called Web Parts in SharePoint and ASP.NET 2.0.

Source: https://www.nilebits.com/blog/2009/10/what-is-a-web-portal/

Tuesday, August 11, 2009

What is the difference between Convert.ToInt32() and Int32.Parse()

Convert.ToInt32(string) and Int32.Parse(string) produce the same results except when the string is actually null.

In this case, Int32.Parse(null) throws an ArgumentNullException, but Convert.ToInt32(null) returns a zero.
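
A quick sketch of the difference:

string s = null;

int a = Convert.ToInt32(s);    // returns 0 for a null reference
// int b = Int32.Parse(s);     // would throw ArgumentNullException

// For non-null numeric strings the two behave identically:
int c = Convert.ToInt32("42"); // 42
int d = Int32.Parse("42");     // 42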

So which one is better to use?

Waiting for your comments...

Wednesday, August 5, 2009

How to Create a GroupBox in ASP.NET

Simple: just use an ASP.NET Panel control and set its GroupingText property.
<asp:panel id="Panel1" runat="server" groupingtext="Group 1" height="355px" />

Saturday, August 1, 2009

ASP.NET State Management

In an ASP.NET Web application, information about the current user, his preferences, and the application's current configuration is stored as values of global variables. This information is also stored in controls and properties of objects in memory, to be used by the application until it is released or terminated. Such information is collectively referred to as the state of a Web page.

To preserve the state information of a Web page between round-trips or page postbacks, ASP.NET provides a user with several server-based as well as client-based techniques. The process of maintaining the state for a Web page across round-trips or page postbacks is referred to as state management.

The following describes several server-based and client-based techniques for managing state of a Web page:
  • Client-based Technique for State Management: Client-based techniques maintain state of a Web page by storing information either on the page or on a client computer. If the information is stored on the client, it is submitted to a Web server by the client with each Web request.
    The following are the client-based techniques for state management:

    • View State: View state is used to persist changes to the state of a Web page across postbacks. View state data is stored as base64-encoded strings in one or more hidden fields and is accessed by using the ViewState property of a Web page. The property provides a dictionary object and preserves page or control property values between multiple user requests for the same Web page. When a Web page is processed, ASP.NET collects all the current page or control property variables, formats them into an encoded string, and saves it in the page as a hidden form field named __VIEWSTATE. At the server side, ASP.NET decodes the view state string during page initialization and restores property information in the page. (A short sketch of view state, cookie, and query string access appears at the end of this post.)

      However, view state has some drawbacks. It increases the size of the HTML file and the amount of time needed to load the page. To mitigate these drawbacks, ASP.NET allows view state to be disabled at various levels. However, when view state is disabled at the page level, view state information for the Web page and its controls is not retained.

    • Control State: View state information of a custom control for a Web page can be stored by using the ControlState property, instead of using the ViewState property of the Web page. The ControlState property retains control property information during multiple round trips to the server. The control state data is specific to a custom control and is retained even if the view state is disabled at the page level.

    • Hidden Fields: A hidden field stores state information in a Web page in a HiddenField control. The control is rendered as an HTML <input type="hidden"> element. A hidden field contains information that is not displayed on a Web page. However, it is sent to the Web server along with the page postbacks.

      However, this technique has some drawbacks. Although users cannot see hidden fields on the rendered Web page, they can read their values in the page's HTML source, so sensitive information should not be stored there. Also, a hidden field can hold only a single value; therefore, several such hidden fields are required to store structured values such as the records of a customer.

    • Cookies: A cookie is a client-based technique and is a small packet of information that stores key-value pairs at the client side. The information is associated with a specific domain and is sent along with the client request on a Web browser. Cookies store preferences of users and provide them with a personalized browsing experience.

      However, there are some limitations regarding this technique. Most Web browsers restrict the size of the information in a cookie. Some users configure their browsers to refuse cookies, whereas others request that the browser persist cookies only for a specified period, so that the browser cannot use its own rules for cookie expiration. The cookies stored at the client side are also not secure, as a user may tamper with the information in them.

    • Query Strings: A query string is a client-based technique that maintains state information by appending it to the URL of a Web page. A question mark '?' separates the state information from the page URL. The state data is represented by a set of key-value pairs, each pair separated from the next by an ampersand character. The following is the general form of a query string on a Web page's URL:

      WebPage.aspx?ID=query_string

      For example: http://www.amrsaafan.net/query.aspx?FirstName=Amr&LastName=Saafan&City=Cairo&Country=Egypt

      The query string technique is simple to use and is widely used when a small amount of information is required in a Web page's URL. However, there are certain drawbacks to using this technique. Most Web browsers limit the amount of state information in a query string to 256 characters. Data stored in a query string does not support structured values, and it is also not secure, as it is visible to the user on the Web page.

  • Server-based Technique for State Management: Server-based techniques maintain the state of a Web page by storing information on a server. The server stores state information and also tracks the client information by using the client-side techniques for state management.
    The following are server-based techniques for state management:

    • Application State.
    • Session State.
    • Profile Properties.
Quoted.
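
As promised above, a minimal code-behind sketch of the client-based techniques (all names and values are invented for the example):

// View state: dictionary-style access, preserved across postbacks of this page
ViewState["SortOrder"] = "Ascending";
string sort = (string)ViewState["SortOrder"];

// Cookie: a small key-value pair persisted on the client
Response.Cookies["Theme"].Value = "Blue";
string theme = Request.Cookies["Theme"].Value;

// Query string: read a value appended to the URL, e.g. query.aspx?FirstName=Amr
string firstName = Request.QueryString["FirstName"];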

Saturday, June 13, 2009

SQL Server as a Runtime Host Part 3

There are three categories of access security for managed code: SAFE, EXTERNAL_ACCESS, and UNSAFE, which we mentioned previously with respect to class loading. These categories allow the DBA to determine whether an assembly should be permitted certain privileges while knowing the risks.

These categories equate to SQL Server–specific permission sets using code access security concepts. Having stated this, there is no specific enhancement to the CLR past the normal stack walk that intercepts all privileged operations as defined by the CLR permissions model and enforces user permissions. For ensuring the integrity of user permissions defined in the database, we depend on the principal execution context of the stored procedure or user-defined function in combination with database roles.
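
The permission set is chosen when the assembly is cataloged. A minimal sketch (the assembly name and path are invented for the example):

CREATE ASSEMBLY MyUtilities
FROM 'C:\assemblies\MyUtilities.dll'
WITH PERMISSION_SET = SAFE   -- or EXTERNAL_ACCESS or UNSAFE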

I have spoken of AppDomains quite a bit in previous articles. It’s time to describe exactly what they are and how SQL Server uses them. In .NET, processes can be subdivided into pieces known as application domains, or AppDomains. Loading the runtime loads a default AppDomain; user or system code can create other AppDomains.

AppDomains are like lightweight processes themselves with respect to code isolation and marshaling. This means that object instances in one AppDomain are not directly available to other AppDomains by means of memory references; the parameters must be “marshaled up” and shipped across. In .NET, the default is marshal-by-value; a copy of the instance data is made and shipped to the caller.

Another choice is marshal-by-reference, in which the caller gets a locator or "logical pointer" to the data in the callee's AppDomain, and subsequent use of that instance involves a cross-AppDomain trip. This isolates one AppDomain's state from others. Each process that loads the .NET Framework creates a default AppDomain.

From this AppDomain, you can create additional AppDomains programmatically, like this:
using System;

public class Launcher
{
    public static int Main(string[] argv)
    {
        // Create a second AppDomain in this process
        AppDomain child = AppDomain.CreateDomain("dom2");

        // Execute yourapp.exe in the child domain and capture its exit code
        int exitCode = child.ExecuteAssembly("yourapp.exe", null, argv);

        // Unload the domain, releasing the code loaded into it
        AppDomain.Unload(child);

        return exitCode;
    }
}
Although there may be many AppDomains in a process, AppDomains cannot share class instances without marshaling. SQL Server does not use the default AppDomain for database processing, although it is used to load the runtime. Exactly how AppDomains are allocated in SQL Server 2005 is opaque to and not controllable by the user or DBA; however, by observation, in the beta version of SQL Server 2005, it can be determined that a separate AppDomain will be created for each database for running that database’s code.

Executing the system function master.sys.fn_appdomains() shows the AppDomains in the SQL Server process when more than one combination is in use. In the beta 1 version, the AppDomains were named "databasename.number" (for example, "AdventureWorks.2"). This effectively isolates each database's user code from the others, albeit at the cost of more virtual memory. In beta 2, AppDomains may be allocated based on the identity of the user owning the assembly, possibly resulting in more AppDomains, but isolating each assembly owner's code.

This effectively prevents using reflection to circumvent SQL Server permissions without the overhead of intercepting each call. The runtime-hosting APIs also support the concept of domain-neutral code. Domain-neutral code means that one copy of the Just-In-Time compiled code is shared across multiple AppDomains.
Although this reduces the working set of the process because only one copy of the code and supported structures exists in memory, it is a bit slower to access static variables, because each AppDomain must have its own copy of static variables and this requires the runtime to add a level of indirection.

There are four domain-neutral code settings:

 1. No assemblies are domain neutral.
 2. All assemblies are domain neutral.
 3. Only strongly named assemblies are domain neutral.
 4. The host can specify a list of assemblies that are domain neutral.

SQL Server 2005 uses the fourth option: it will share only a set of Framework assemblies. It doesn't share strongly named user assemblies, because user assemblies that happen to be strongly named could then never be unloaded. AppDomains do not have a concept of thread affinity; that is, all AppDomains share the common CLR thread pool.

This means that although object instances must be marshaled across AppDomains, the marshaling is more lightweight than COM marshaling, for example, because not every marshal requires a thread switch. This also means it is possible to delegate the management of all threads to SQL Server while retaining the existing marshaling behavior with respect to threads.

Source: https://www.nilebits.com/blog/2009/09/sql-server-as-a-runtime-host/

Tuesday, June 9, 2009

SQL Server as a Runtime Host Part 2

Since, for reliability reasons, SQL Server users are not allowed to run arbitrary programs, code (an assembly) is loaded a little differently than in other runtime hosts. The user or DBA must preload the code into the database and define which portions are invocable from Transact-SQL.

Preloading and defining code uses ordinary SQL Server Data Definition Language (DDL). Loading code as a stream of bytes from the database rather than from the file system makes SQL Server’s class loader unique. The class libraries that make up the .NET Framework are treated differently from ordinary user code in that they are loaded from the global assembly cache and are not defined to SQL Server or stored in SQL Server.
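
A minimal sketch of that DDL (the assembly, class, and method names are invented for the example): the assembly is cataloged first, and then individual methods are exposed to Transact-SQL:

-- Preload the code into the database
CREATE ASSEMBLY PayrollLib FROM 'C:\assemblies\PayrollLib.dll'
GO
-- Define which portions are invocable from Transact-SQL
CREATE FUNCTION dbo.ComputeTax(@amount money) RETURNS money
AS EXTERNAL NAME PayrollLib.[Payroll.TaxRoutines].ComputeTax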

Some portions of the base class libraries may have no usefulness in a SQL Server environment (for example, System.Windows.Forms); some may be dangerous to the health of the service process when used incorrectly (System.Threading) or may be a security risk (portions of System.Security). The architects of SQL Server 2005 have reviewed the class libraries that make up the .NET Framework, and only those deemed relevant will be enabled for loading.

This is accomplished by providing the CLR with a list of libraries that are OK to load. SQL Server will take the responsibility for validating all user libraries, to determine that they don't contain non-read-only static variables, for example. SQL Server does not allow sharing state between user libraries, and it registers, through the new CLR hosting APIs, for notification of all interassembly calls.

In addition, user libraries are divided into three categories by degree of danger; assemblies can be assigned to a category and use only the appropriate libraries for that category.
Because code in SQL Server must be reliable, SQL Server will only load the exact version of the Framework class libraries it supports.

This is analogous to shipping a particular tested version of ADO with SQL Server. Multiple versions of your code will be able to run side by side (though this was not enabled in beta 1), but the assemblies must be defined with different SQL Server object names.

Source: https://www.nilebits.com/blog/2009/09/sql-server-as-a-runtime-host/

Sunday, June 7, 2009

SQL Server as a Runtime Host Part 1

If you are a SQL Server developer or database administrator, you might just be inclined to use the new Common Language Runtime (CLR) hosting feature to write stored procedures in C# or VB.NET without knowing how it works. But you should care. SQL Server is an enterprise application, perhaps one of the most important in your organization. When the CLR was added to SQL Server, there were three goals in the implementation, considered in this order:

1. Security
2. Reliability
3. Performance

The reasons for this order are obvious. Without a secure system, you have a system that reliably runs any code, including code introduced by hackers, very quickly; that is not what you'd want for an enterprise application. Reliability comes next. Critical applications, like a database management system, are expected to be available 99.99% of the time.

You don't want to wait in a long line at the airport or the bank while the database restarts itself. Reliability is therefore considered over performance when the two clash; a decision might be whether to allow stack overflows to potentially bring down the main application, or to slow down processing to make sure they don't. Since applications that perform transactional processing use SQL Server, SQL Server must ensure data integrity and transactional correctness, which is another facet of reliability.

Performance is extremely important in an enterprise application as well. Database management systems can be judged on benchmarks, such as the TPC-C (Transaction Processing Performance Council benchmark C) benchmark, as well as programmer-friendly features. So although having stored procedures and user-defined types written in high-level languages is a nice feature, it has to be implemented in such a way as to maximize performance.

Since SQL Server 2005 is going to introduce fundamental changes such as loading .NET runtime engines and XML parsers, we’ll first consider how SQL Server 2005 works as a .NET runtime host, how it compares with other .NET runtime hosts, and what special features of the runtime are used to ensure security, reliability, and performance.

You may already know that an updated version of the .NET runtime, .NET 2.0, will be required for use with SQL Server. In this article, I will explain why. A runtime host is defined as any process that loads the .NET runtime and runs code in a managed environment. The most common scenario is that a runtime host is simply a bootstrap program that executes from the Windows shell, loads the runtime into memory, and then loads one or more managed assemblies.

An assembly is the unit of deployment in .NET, roughly analogous to an executable program or DLL in prior versions of Windows. Prior to Whidbey (.NET 2.0), a runtime host loaded the runtime by using the ICorRuntimeHost interface or the CorBindToRuntimeEx API. These APIs call a shim DLL, MSCOREE.DLL, whose only job is to load the runtime.

Only a single copy of the runtime (also known as the CLR) engine can ever be loaded into a process during the process's lifetime; it is not possible to run multiple versions of the CLR within the same host. In pre-Whidbey versions of .NET, a host could specify only a limited number of parameters to ICorRuntimeHost or CorBindToRuntimeEx, namely the following:

• Server or workstation behavior
• Version of the CLR (for example, version 1.0.3705.0)
• Garbage collection behavior
• Whether or not to share Just-In-Time compiled code across AppDomains (an AppDomain is a subdivision of the CLR runtime space)

Two examples of specialized runtime hosts are the ASP.NET worker process and Internet Explorer. The ASP.NET worker process differs in code location and in how the executable code, threads, and AppDomains are organized. It divides code into separate "applications," application being a term borrowed from Internet Information Server to denote code running in a virtual directory. Code is located in virtual directories, which are mapped to physical directories in the IIS metabase.

Internet Explorer is another runtime host with behaviors that differ from the ASP.NET worker or SQL Server 2005. IE loads code when it encounters a specific type of <object> tag in a Web page.
The location of the code is obtained from an HTML attribute of the tag. SQL Server 2005 is an example of a specialized runtime host that goes far beyond ASP.NET in specialization and control of CLR semantics.

SQL Server’s special requirements of utmost security, reliability, and performance, in addition to the way that SQL Server works internally, have necessitated an overhaul in how the managed hosting APIs work as well as in how the CLR works internally. Although early versions of SQL Server 2005 did run on .NET version 1.0, the changes in the CLR are important in ensuring enterprise quality.

SQL Server is a specialized host like ASP.NET and IE, rather than a simple bootstrap mechanism. The runtime is lazily loaded; if you never use a managed stored procedure or user-defined type, the runtime is never loaded. This is useful because loading the runtime takes a one-time memory allocation of approximately 10–15MB in addition to SQL Server's buffers and unmanaged executable code, although this certainly will not be the exact number in SQL Server 2005.

How SQL Server manages its resources and locates the code to load is unique as well. To accommodate hosts that want to have hooks into the CLR’s resource allocation and management, .NET 2.0 hosts can use ICLRRuntimeHost instead of ICorRuntimeHost. The host can then call ICLRRuntimeHost::SetHostControl, which takes a pointer to an interface (IHostControl) that contains a method that the CLR can call (GetHostManager) to delegate things like thread management to the host.

SQL Server uses this interface to take control of some functions that the CLR usually calls down to the operating system for directly. SQL Server manages its own thread scheduling, synchronization and locking, and memory allocation. In .NET runtime hosts, these are usually managed by the CLR itself.
In SQL Server 2005 this conflict is resolved by layering the CLR’s mechanisms on top of SQL Server’s mechanisms.

SQL Server uses its own memory allocation scheme, managing real memory rather than using virtual memory. It attempts to optimize memory, balancing between data and index buffers, query caches, and other internal data structures. SQL Server can do a better job if it manages all of the memory in its process.

As an example, prior to SQL Server 2000, it was possible to specify that the TEMPDB database should be allocated in memory. In SQL Server 2000 that option was removed, based on the fact that SQL Server can manage this better than the programmer or DBA. SQL Server manages its memory directly by, in effect, controlling paging of memory to disk itself rather than letting the operating system do it.

Because SQL Server attempts to use as much memory as is allocated to the process, this has some repercussions in exceptional condition handling, which I will discuss next. SQL Server also uses its own thread management algorithm, putting threads “to sleep” until it wants them to run. This facility is known as UMS (user-mode scheduler).

Optionally, SQL Server can use fibers rather than threads through a configuration option, though this option is rarely used. The CLR also maintains thread pools and allows programmers to create new threads. The key point is that SQL Server uses cooperative thread scheduling; the CLR uses preemptive thread scheduling.

Cooperative thread scheduling means that a thread must voluntarily yield control of the processor; in preemptive thread scheduling, the processor takes control back from the thread after its time slice has expired. SQL Server uses cooperative thread scheduling to minimize thread context switches.

With threading come considerations of thread synchronization and locking. SQL Server manages locking of its own resources, such as database rows and internal data structures. Allowing programmers to spin up a thread is not to be taken lightly. SQL Server 2005 CLR code executes under different permission levels with respect to CLR activities.

The hosting APIs in .NET 2.0 are enhanced to enable the runtime host to either control or “have a say in” resource allocation. The APIs manage units of work called Tasks, which can be assigned to a thread or a fiber. The SQL scheduler manages blocking points, and hooks PInvoke and interop calls out of the runtime to control switching the scheduling mode.

The new control points allow SQL Server to supply a host memory allocator, to be notified of low memory conditions at the OS level, and to fail memory allocations if desired. SQL Server can also use the hosting API to control I/O completion ports usually managed by the CLR.

Although this may slow things down a little in the case of an allocation callback, it is of great benefit in allowing SQL Server to manage all of its resources, as it does today. In .NET 1.0, certain exceptional conditions, such as an out-of-memory condition or a stack overflow, could bring down a running process (or AppDomain). This cannot be allowed to happen in SQL Server.

Although transactional semantics might be preserved, reliability and performance would suffer dramatically.
In addition, unconditionally stopping a thread (using Thread.Abort or other API calls) can conceivably leave some system resources in an indeterminate state and can even leak memory, although garbage collection minimizes such leakage. Different runtime hosts deal with these hard-to-handle conditions in different ways.

In the ASP.NET worker process, for example, recycling both the AppDomain and the process itself is considered acceptable since disconnected, short-running Web requests would hardly notice. With SQL Server, rolling back all the in-flight transactions might take a few minutes. Process recycling would ruin long-running batch jobs in progress.

Therefore, changes to the hosting APIs and the CLR exceptional condition handling needed to be made. Out-of-memory conditions are particularly difficult to handle correctly, even when you leave a safety buffer of memory to respond to them. In SQL Server the situation is exacerbated because SQL Server manages its own memory and attempts to use all memory available to it to maximize throughput.

This leaves us between a rock and a hard place. As we increase the size of the “safety net” to handle out-of-memory conditions, we also increase the occurrence of out-of-memory conditions. The Whidbey runtime handles these conditions more robustly; that is, it guarantees availability and reliability after out-of-memory conditions without requiring SQL Server to allocate a safety net, letting SQL Server tune memory usage to the amount of physical memory.

The CLR will notify SQL Server about the repercussions of failing each memory request. Low-memory conditions may be handled by permitting the garbage collector to run more frequently, waiting for other procedural code to finish before invoking additional procedures, or aborting running threads if needed.

There is also a failure escalation policy at the CLR level that will allow SQL Server to determine how to deal with exceptions. SQL Server can decide to abort the thread that causes an exception and, if necessary, unload the AppDomain. On resource failures, the CLR will unwind the entire managed stack of the session that takes the resource failure.

If that session has any locks, the entire AppDomain that session is in is unloaded. This is because having locks indicates there is some shared state to synchronize, and thus that shared state is not likely to be consistent if just the session was aborted. In certain cases this might mean that finally blocks in CLR code may not run.

In addition, finalizers, hooks that programmers can use to do necessary but not time-critical resource cleanup, might not get run. Except in UNSAFE mode (discussed later in the chapter), finalizers are not permitted in CLR code that runs in SQL Server.

Stack overflow conditions cannot be entirely prevented and are usually handled by implementing exceptional condition handling in the program. If the program does not handle this condition, the CLR will catch these exceptions, unwind the stack, and abort the thread if needed. In exceptional circumstances, such as when memory allocation during a stack overflow causes an out-of-memory condition, recycling the AppDomain may be necessary.

In all the cases just mentioned, SQL Server will maintain transactional semantics. In the case of AppDomain recycling, this asserts that the principal concern is reliability, if needed at the expense of performance. In addition, all the Framework class libraries (FX libraries) that SQL Server will load have gone through a careful review and testing to ensure that they clean up all memory and other resources after a thread abort or an AppDomain unload.

Source: https://www.nilebits.com/blog/2009/09/sql-server-as-a-runtime-host/

Monday, June 1, 2009

System Metadata Tables and INFORMATION_SCHEMA in SQL Server 2005

Information about assemblies, as well as the assembly code itself and its dependencies, is stored in the system metadata tables, which, in general, store information about SQL Server database objects, such as tables and indexes. Some metadata tables store information for the entire database instance and exist only in the MASTER database; some are replicated in every database, user databases as well as MASTER. The names of the tables and the information they contain are proprietary.

System metadata tables are performant, however, because they reflect the internal data structures of SQL Server. In the big rewrite that took place in SQL Server 7, the system metadata tables remained intact. In SQL Server 2005, the metadata tables have been overhauled, revising the layout of the metadata information and adding metadata for new database objects.

In addition, programmers and DBAs can no longer write to the system metadata; it is really a read-only view. The SQL INFORMATION_SCHEMA, on the other hand, is a series of metadata views defined by the ANSI SQL specification as a standard way to expose metadata.

The views evolve with the ANSI SQL specification; SQL:1999 standard INFORMATION_SCHEMA views are a superset of the SQL-92 views. SQL Server 2000 supports the INFORMATION_SCHEMA views at the SQL-92 standard level; some of the SQL:1999 views may be added in SQL Server 2005.
SQL Server is, so far, one of the few major databases to support the INFORMATION_SCHEMA views.

Getting Metadata from SQL Server:
-- this uses the system metadata tables
SELECT * FROM sysobjects WHERE [type] = 'U'
-- this uses the INFORMATION_SCHEMA
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'

SQL Server 2005 includes a reorganization of the system metadata tables. This includes renaming the tables to use a separate schema (named SYS) as well as table renames and reorganization of some of the information. The goal, once again, is speed and naming consistency.

The equivalent query to the previous two using the new system metadata tables would be as follows:
SELECT * FROM SYS.TABLES
Note that the information returned by the three queries differs in the number of columns returned, the column names used, and the information in the resultset. Information about assemblies and the assembly code itself is stored in three metadata tables. These tables exist per database, since assemblies are scoped to the database and schema.

sys.assemblies stores information about the assembly itself, as well as its schema_id, assembly_id, and the .NET version number. The assembly dependencies are stored in sys.assembly_references, one row per assembly-reference pair. Finally, the assembly code itself is cataloged in sys.assembly_files.

In all cases, this table contains the actual code rather than the name of the file where the code resided when it was cataloged. The original file location is not even kept as metadata. In addition, if you have added a debugger file, using the ALTER ASSEMBLY ADD FILE DDL statement, the debug information will appear as an additional entry in the sys.assembly_files table.
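
For example, a join over these catalog views lists each assembly together with its files (a sketch using the shipped column names):

SELECT a.name, a.permission_set_desc, f.name AS file_name
FROM sys.assemblies AS a
JOIN sys.assembly_files AS f ON a.assembly_id = f.assembly_id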

Notice that you can define an assembly that is “invisible” with respect to defining routines and types to the runtime. Lack of visibility is the default when SQL Server loads dependent assemblies of an assembly defined using CREATE ASSEMBLY. You might do this, for example, to define a set of utility routines to be invoked internally only.

If you specify IsVisible=true (the default), methods and types in this assembly can be declared as SQL Server methods and types, either through the "list" properties or directly through DDL.

Source: https://www.nilebits.com/blog/2009/08/system-metadata-tables-and-information_schema-in-sql-server/

Monday, March 23, 2009

Visual Studio 2005/2008 on Vista Internet Explorer cannot display the webpage

I faced a strange problem with Visual Studio 2005 and Visual Studio 2008. I have several ASP.NET projects, some of them using .NET 2.0 with Visual Studio 2005 and the others using .NET 3.5 with Visual Studio 2008.

The problem was that when I tried to run a Web Application, I got this error:

Internet Explorer cannot display the webpage

I am not using IIS for these Web Applications. I tried the usual solutions, such as restarting Visual Studio, restarting my laptop, and reinstalling Visual Studio, but nothing solved the problem. After searching for a solution, I finally got the answer.

All you have to do is edit the hosts file located at %windir%\System32\drivers\etc\hosts

Open this file with Notepad (don't forget to right-click and choose Run as administrator). Just comment out the line that contains localhost, as shown below:

No restart is needed, either of the computer or of Visual Studio.

# Copyright (c) 1993-2006 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# 127.0.0.1 localhost
# ::1 localhost

Source: https://www.nilebits.com/blog/2009/07/visual-studio-2005-2008-on-vista-internet-explorer-cannot-display-the-webpage/

Tuesday, March 3, 2009

Establishing a Formal Programmers Syndicate in Egypt

In this post we will discuss everything about establishing a Formal Syndicate or Union for Programmers in Egypt.

Please start to leave comments.

Monday, February 9, 2009

Where is the best place to store the connection string?

I think the best place to store the connection string is the Web.config file if you are developing a Web Application, or the App.config file if you are developing a Windows Application.
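
A minimal sketch (names and values are invented for the example). The connection string goes in the <connectionStrings> section of Web.config or App.config:

<connectionStrings>
  <add name="MyDb"
       connectionString="Data Source=.;Initial Catalog=Northwind;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>

Code then reads it through ConfigurationManager (add a reference to System.Configuration):

string connStr = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;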

You have another opinion? Leave a comment.

Thursday, February 5, 2009

Subtracting Two Float Digits in .NET

I have faced a very strange issue in .NET when subtracting two float numbers: "1.5 - 1.1". The result should be 0.4, but it was 0.39999999999999991. This happens because many decimal fractions, such as 1/10, cannot be represented exactly in binary floating point.

To solve this issue, all you have to do is use the Round function of the Math class provided by the .NET Framework. So if you do Math.Round(1.5 - 1.1, 1), the result will be 0.4 as expected. Be careful when dealing with float and decimal, especially if you are working on a financial application.
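
A short sketch of the behavior and both workarounds:

double d = 1.5 - 1.1;
Console.WriteLine(d.ToString("R"));  // 0.39999999999999991 (full precision)
Console.WriteLine(d == 0.4);         // False
Console.WriteLine(Math.Round(d, 1)); // 0.4

// decimal uses a base-10 representation, so the subtraction is exact
decimal m = 1.5m - 1.1m;
Console.WriteLine(m);                // 0.4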

Source: https://www.nilebits.com/blog/2009/06/subtracting-two-float-digits-in-net/

Tuesday, February 3, 2009

How to Access controls on Master Pages from Content Pages in ASP .NET

We will assume that you have a Label control called "Label1" on your Master Page and that you want to change the text of that Label.
To do that, write this code in your content page:
Label lbl = (Label)Master.FindControl("Label1");
lbl.Text = "Text";
Of course, this is all assuming the Label is OUTSIDE the ContentPlaceHolder.

Source: https://www.nilebits.com/blog/2003/05/how-to-access-controls-on-master-pages-from-content-pages-in-asp-net/

Sunday, February 1, 2009

How to Secure Session State in ASP.NET

The information in session state is very secure, because it is stored exclusively on the server. However, the cookie with the session ID can easily become compromised. This means an eavesdropper could steal the cookie and assume the session on another computer.

Several workarounds address this problem. One common approach is to use a custom session module that checks for changes in the client’s IP address. However, the only truly secure approach is to restrict session cookies to portions of your website that use SSL.

That way, the session cookie is encrypted and useless on other computers. If you choose to use this approach, it also makes sense to mark the session cookie as a secure cookie so that it will be sent only over SSL connections.

That prevents the user from changing the URL from https:// to http://, which would send the cookie without SSL. Here’s the code you need:
Request.Cookies["MySessionId"].Secure = true;
Typically, you’ll use this code immediately after the user is authenticated. Make sure there is at least one piece of information in session state so the session isn’t abandoned (and then re-created later). Another related security risk exists with cookieless sessions. Even if the session ID is encrypted, a clever user could use a social engineering attack to trick a user into joining a specific session.

All the malicious user needs to do is feed the user a URL with a valid session ID. When the user clicks the link, he joins that session. Although the session ID is protected from this point onward, the attacker now knows what session ID is in use and can hijack the session at a later time.

Taking certain steps can reduce the likelihood of this attack. First, when using cookieless sessions, always set regenerateExpiredSessionId to true. This prevents the attacker from supplying a session ID that’s expired. Next, explicitly abandon the current session before logging in a new user.

Quoted.
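
For reference, the regenerateExpiredSessionId setting mentioned above lives on the sessionState element in web.config; a minimal sketch:

<sessionState cookieless="UseUri" regenerateExpiredSessionId="true" />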

Thursday, January 29, 2009

Encryption in C# .NET

This class uses a symmetric key algorithm (Rijndael/AES) to encrypt and decrypt data. As long as encryption and decryption routines use the same parameters to generate the keys, the keys are guaranteed to be the same.

In a real-life application, this may not be the most efficient way of handling encryption, so as soon as you feel comfortable with it you may want to redesign this class.
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public class Encryption
{
    /// <summary>
    /// Encrypts specified text using Rijndael symmetric key algorithm
    /// and returns a base64-encoded result.
    /// </summary>
    /// <param name="plainText">
    /// Plaintext value to be encrypted.
    /// </param>
    /// <param name="passPhrase">
    /// Passphrase from which a pseudo-random password will be derived. The
    /// derived password will be used to generate the encryption key.
    /// Passphrase can be any string. In this example we assume that this
    /// passphrase is an ASCII string.
    /// </param>
    /// <param name="saltValue">
    /// Salt value used along with passphrase to generate password. Salt can
    /// be any string. In this example we assume that salt is an ASCII string.
    /// </param>
    /// <param name="hashAlgorithm">
    /// Hash algorithm used to generate password. Allowed values are: "MD5" and
    /// "SHA1". SHA1 hashes are a bit slower, but more secure than MD5 hashes.
    /// </param>
    /// <param name="passwordIterations">
    /// Number of iterations used to generate password. One or two iterations
    /// should be enough.
    /// </param>
    /// <param name="initVector">
    /// Initialization vector (or IV). This value is required to encrypt the
    /// first block of plaintext data. For RijndaelManaged class IV must be
    /// exactly 16 ASCII characters long.
    /// </param>
    /// <param name="keySize">
    /// Size of encryption key in bits. Allowed values are: 128, 192, and 256.
    /// Longer keys are more secure than shorter keys.
    /// </param>
    /// <returns>
    /// Encrypted value formatted as a base64-encoded string.
    /// </returns>
    public string Encrypt(string plainText, string passPhrase, string saltValue, string hashAlgorithm, int passwordIterations, string initVector, int keySize)
    {
        // Convert strings into byte arrays.
        // Let us assume that strings only contain ASCII codes.
        // If strings include Unicode characters, use Unicode, UTF7, or UTF8
        // encoding.
        byte[] initVectorBytes = Encoding.ASCII.GetBytes(initVector);
        byte[] saltValueBytes = Encoding.ASCII.GetBytes(saltValue);

        // Convert our plaintext into a byte array.
        // Let us assume that plaintext contains UTF8-encoded characters.
        byte[] plainTextBytes = Encoding.UTF8.GetBytes(plainText);

        // First, we must create a password, from which the key will be derived.
        // This password will be generated from the specified passphrase and
        // salt value. The password will be created using the specified hash
        // algorithm. Password creation can be done in several iterations.
        PasswordDeriveBytes password = new PasswordDeriveBytes(passPhrase, saltValueBytes, hashAlgorithm, passwordIterations);

        // Use the password to generate pseudo-random bytes for the encryption
        // key. Specify the size of the key in bytes (instead of bits).
        byte[] keyBytes = password.GetBytes(keySize / 8);

        // Create uninitialized Rijndael encryption object.
        RijndaelManaged symmetricKey = new RijndaelManaged();

        // It is reasonable to set encryption mode to Cipher Block Chaining
        // (CBC). Use default options for other symmetric key parameters.
        symmetricKey.Mode = CipherMode.CBC;

        // Generate encryptor from the existing key bytes and initialization
        // vector. Key size will be defined based on the number of the key
        // bytes.
        ICryptoTransform encryptor = symmetricKey.CreateEncryptor(keyBytes, initVectorBytes);

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream();

        // Define cryptographic stream (always use Write mode for encryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);
        // Start encrypting.
        cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Length);

        // Finish encrypting.
        cryptoStream.FlushFinalBlock();

        // Convert our encrypted data from a memory stream into a byte array.
        byte[] cipherTextBytes = memoryStream.ToArray();

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

        // Convert encrypted data into a base64-encoded string.
        string cipherText = Convert.ToBase64String(cipherTextBytes);

        // Return encrypted string.
        return cipherText;
    }

    /// <summary>
    /// Decrypts specified ciphertext using Rijndael symmetric key algorithm.
    /// </summary>
    /// <param name="cipherText">
    /// Base64-formatted ciphertext value.
    /// </param>
    /// <param name="passPhrase">
    /// Passphrase from which a pseudo-random password will be derived. The
    /// derived password will be used to generate the encryption key.
    /// Passphrase can be any string. In this example we assume that this
    /// passphrase is an ASCII string.
    /// </param>
    /// <param name="saltValue">
    /// Salt value used along with passphrase to generate password. Salt can
    /// be any string. In this example we assume that salt is an ASCII string.
    /// </param>
    /// <param name="hashAlgorithm">
    /// Hash algorithm used to generate password. Allowed values are: "MD5" and
    /// "SHA1". SHA1 hashes are a bit slower, but more secure than MD5 hashes.
    /// </param>
    /// <param name="passwordIterations">
    /// Number of iterations used to generate password. One or two iterations
    /// should be enough.
    /// </param>
    /// <param name="initVector">
    /// Initialization vector (or IV). This value is required to encrypt the
    /// first block of plaintext data. For RijndaelManaged class IV must be
    /// exactly 16 ASCII characters long.
    /// </param>
    /// <param name="keySize">
    /// Size of encryption key in bits. Allowed values are: 128, 192, and 256.
    /// Longer keys are more secure than shorter keys.
    /// </param>
    /// <returns>
    /// Decrypted string value.
    /// </returns>
    /// <remarks>
    /// Most of the logic in this function is similar to the Encrypt logic.
    /// In order for decryption to work, all parameters of this function
    /// - except the cipherText value - must match the corresponding parameters
    /// of the Encrypt function which was called to generate the ciphertext.
    /// </remarks>
    public string Decrypt(string cipherText, string passPhrase, string saltValue, string hashAlgorithm, int passwordIterations, string initVector, int keySize)
    {
        // Convert strings defining encryption key characteristics into byte
        // arrays. Let us assume that strings only contain ASCII codes.
        // If strings include Unicode characters, use Unicode, UTF7, or UTF8
        // encoding.
        byte[] initVectorBytes = Encoding.ASCII.GetBytes(initVector);
        byte[] saltValueBytes = Encoding.ASCII.GetBytes(saltValue);

        // Convert our ciphertext into a byte array.
        byte[] cipherTextBytes = Convert.FromBase64String(cipherText);

        // First, we must create a password, from which the key will be
        // derived. This password will be generated from the specified
        // passphrase and salt value. The password will be created using
        // the specified hash algorithm. Password creation can be done in
        // several iterations.
        PasswordDeriveBytes password = new PasswordDeriveBytes(passPhrase, saltValueBytes, hashAlgorithm, passwordIterations);

        // Use the password to generate pseudo-random bytes for the encryption
        // key. Specify the size of the key in bytes (instead of bits).
        byte[] keyBytes = password.GetBytes(keySize / 8);

        // Create uninitialized Rijndael encryption object.
        RijndaelManaged symmetricKey = new RijndaelManaged();

        // It is reasonable to set encryption mode to Cipher Block Chaining
        // (CBC). Use default options for other symmetric key parameters.
        symmetricKey.Mode = CipherMode.CBC;

        // Generate decryptor from the existing key bytes and initialization
        // vector. Key size will be defined based on the number of the key
        // bytes.
        ICryptoTransform decryptor = symmetricKey.CreateDecryptor(keyBytes, initVectorBytes);

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream(cipherTextBytes);

        // Define cryptographic stream (always use Read mode for decryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, decryptor, CryptoStreamMode.Read);

        // Since at this point we don't know what the size of decrypted data
        // will be, allocate the buffer long enough to hold ciphertext;
        // plaintext is never longer than ciphertext.
        byte[] plainTextBytes = new byte[cipherTextBytes.Length];

        // Start decrypting.
        int decryptedByteCount = cryptoStream.Read(plainTextBytes, 0, plainTextBytes.Length);

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

        // Convert decrypted data into a string.
        // Let us assume that the original plaintext string was UTF8-encoded.
        string plainText = Encoding.UTF8.GetString(plainTextBytes, 0, decryptedByteCount);

        // Return decrypted string.
        return plainText;
    }
}
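
A usage sketch (all parameter values below are invented for illustration; in production, generate a random salt and IV per message):

Encryption enc = new Encryption();

string cipherText = enc.Encrypt("Hello, World!",     // plaintext
                                "Pas5pr@se",         // passphrase
                                "s@1tValue",         // salt
                                "SHA1",              // hash algorithm
                                2,                   // password iterations
                                "@1B2c3D4e5F6g7H8",  // 16-character IV
                                256);                // key size in bits

string plainText = enc.Decrypt(cipherText, "Pas5pr@se", "s@1tValue",
                               "SHA1", 2, "@1B2c3D4e5F6g7H8", 256);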

Source: https://www.nilebits.com/blog/2009/04/encryption-in-c-net/

Tuesday, January 27, 2009

Cross Page postback in ASP.NET

This is a new feature in ASP.NET 2.0. The IButtonControl interface contains a new property called PostBackUrl, which points to the page to which the current page will post back. Button, ImageButton, and LinkButton implement this interface and expose the cross-page postback functionality.

When the user clicks the button, the current page posts back to the specified page, which can access the source page's controls through the Page.PreviousPage property. Once you have a reference to the previous page, you can use the FindControl method to get a reference to a particular control, or you can expose public properties from the source page to provide type-safe access.

For example:
Label1.Text =((TextBox)this.PreviousPage.FindControl("TextBox1")).Text;

To provide strongly typed access to the previous page, you can specify it in the PreviousPageType directive; since you then have a strongly typed reference to the previous page, you can access its public members without any typecasting.

Example:

Create two .aspx pages; name the first one step_1.aspx and the other step_2.aspx.
In step_1.aspx, put a TextBox named TextBox1 and a Button whose PostBackUrl property is set to ~/step_2.aspx.
Expose the following property in the code-behind file:
public TextBox MyName { get { return TextBox1; } }

In step_2.aspx, put this directive in the markup:
<%@ PreviousPageType VirtualPath="~/step_1.aspx" %>
Then, in the Page_Load event of step_2.aspx, write this code:
Response.Write(this.PreviousPage.MyName.Text);
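If users can also reach step_2.aspx directly, it is worth guarding the access. This is a minimal sketch: PreviousPage is null on a direct request, and the IsCrossPagePostBack property of the previous page distinguishes a cross-page postback from a Server.Transfer:

protected void Page_Load(object sender, EventArgs e)
{
    // Only read the source page when we actually arrived via cross-page postback
    if (PreviousPage != null && PreviousPage.IsCrossPagePostBack)
    {
        Response.Write(PreviousPage.MyName.Text);
    }
}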

Source: https://www.nilebits.com/blog/2009/03/cross-page-postback-in-asp-net/

Sunday, January 25, 2009

Ajax Modal popup Extender - Infinite Scroll bar in ASP.NET 3.5


I was using the Ajax Modal Popup Extender on a page that uses a master page in a web application I am working on. After the modal popup extender came up, an infinite scroll bar appeared along with a black box, and when I scrolled down, the background became editable.

It was a very strange problem for me, but I was able to fix it. Here is the solution:


Make sure your master page has this line:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
Or put it in the page itself if you are using a user control.

Friday, January 23, 2009

The permissions granted to user 'Domain\UserName' are insufficient for performing this operation

I faced a problem when I tried to open the Reporting Services web site for SQL Server 2005 running on Vista Ultimate.

As you can see from the title of the post the error was

"The permissions granted to user 'Domain\UserName' are insufficient for performing this operation"

I had installed SQL Server 2005 Service Pack 2 and granted the user administrator permissions, and I still got this error!

Finally, just right-click Internet Explorer, choose Run as administrator, and it will work.

Source: https://www.nilebits.com/blog/2009/02/the-permissions-granted-to-user-domainusername-are-insufficient-for-performing-this-operation/

Wednesday, January 21, 2009

Benefits of ASP.NET

ASP.NET provides many truly impressive benefits over classic ASP, mainly because it is built into the managed environment of the .NET Framework. ASP.NET applications are compiled, rather than interpreted, and they benefit from a sophisticated Class Framework that provides a powerful, extensible application programming interface (API).

At the risk of rehashing what others have already said, the benefits of ASP.NET are as follows:
  • ASP.NET applications are developed using compiled code and can be written using any .NET-compliant language, including VB .NET, C# .NET, and J# .NET. ASP.NET is built into the .NET Class Framework, so it benefits from an extensive API as well as all the benefits of the .NET managed environment, including type safety, multilanguage support, optimized memory management, and a just-in-time (JIT) compilation model that optimizes the availability and performance of the application at runtime.
  • ASP.NET provides an extensible architecture that you can easily customize. For example, every Web form can be programmatically accessed using the Page handler object. This object not only encapsulates the Web form, but it also provides direct access to the context objects, including the HttpRequest, HttpApplication, and HttpSessionState objects.
  • The ASP.NET runtime engine architecture is also designed for optimal performance and extensibility. The runtime engine executes in a separate process from Internet Information Server (IIS), and it delegates HTTP requests across a range of handler classes. These classes can be customized or even dropped from an application if they do not provide any benefit. Custom HTTP handlers process HTTP requests at the level of the ASP.NET runtime engine. You can write custom HTTP handlers using a greatly simplified API that avoids the complexity of traditional ISAPI modules.
  • ASP.NET provides several ways to cache page content, including full-page caching and partial-page caching (fragment caching). In addition, ASP.NET provides a Cache API for even more granular control over cached content.
  • ASP.NET provides sophisticated debugging and tracing abilities that make application troubleshooting easier than before. The .NET-compliant languages provide structured error handling and a large set of detailed exception classes. Errors will still occur in ASP.NET, of course, but you now benefit from more detailed exception reporting than when using classic ASP.
  • ASP.NET provides several methods for authenticating clients, including Windows authentication, Forms-based authentication, Passport-based authentication, and standard IIS authentication (impersonation).
Quoted.
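As a quick illustration of the custom HTTP handlers mentioned in the list above, here is a minimal sketch; the class name, path, and registration details are illustrative:

using System.Web;

// A minimal custom HTTP handler (class name is illustrative)
public class HelloHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Handled directly by the ASP.NET runtime engine.");
    }

    // Returning true lets the runtime reuse one instance across requests
    public bool IsReusable { get { return true; } }
}

The handler would then be mapped to a path in Web.config (under the system.web element), for example:

<httpHandlers>
  <add verb="*" path="hello.ashx" type="HelloHandler" />
</httpHandlers>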

Monday, January 19, 2009

Monitoring Performance

We have established that performance monitoring does not just begin in a vacuum. It begins once the developers and system architects have released the first version of an application and have collected initial performance measurements.

Presumably the team has also established its performance expectations for the Web application and already has a sense of whether additional optimization will be required. The initial baseline is a collection of measurements on specific performance factors. These factors combine to produce the overall responsiveness of the application on a specific hardware platform, and under a specific load.

Performance profiling involves three important steps:
  • Monitoring: Monitoring includes setting up the counters and traces to collect performance data and picking a sampling period.
  • Analysis: The monitoring data must be collected, analyzed for problems, and compared against the baseline data.
  • Loading, or stress testing: This involves a forceful ramping up of load on the Web application to observe how the performance metrics change in response. Monitoring by itself can be a passive activity where measurements are collected and analyzed, but the system is allowed to operate without interference by the development team.
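For the monitoring step above, the usual tools are the standard Windows performance counters. Here is a minimal C# sampling sketch; the counter categories shown are the standard ones, while the instance names and the one-second sampling period are assumptions for illustration:

using System;
using System.Diagnostics;
using System.Threading;

class CounterSample
{
    static void Main()
    {
        // Total CPU utilization and aggregate ASP.NET request throughput
        PerformanceCounter cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        PerformanceCounter requests = new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__");

        for (int i = 0; i < 60; i++)   // one sample per second for a minute
        {
            Thread.Sleep(1000);
            Console.WriteLine("CPU: {0,5:F1}%   Requests/Sec: {1,6:F1}",
                cpu.NextValue(), requests.NextValue());
        }
    }
}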
Once performance profile reports have been generated and analyzed, the technical team may choose to optimize a specific area of the application or the system. Once the optimization has been implemented, the iterative cycle of profiling and optimization begins again for as long as it takes to bring the application performance within an acceptable range.

Optimization is never really completed. It continues for as long as necessary, or for as long as the team thinks that further performance improvements are possible. Ultimately, the goal is to bring performance to a level where the users are satisfied and to have the technical team feel satisfied they have delivered an application whose performance meets everyone's expectations.

Quoted.

Saturday, January 17, 2009

Measuring Application Performance

You can measure application performance by three broad measures:
  • Throughput: Throughput is the number of requests that a Web application can serve in a specified unit of time. Throughput is typically specified in requests per second.
  • Availability: Availability is the percentage of time a Web application is responsive to client requests.
  • Scalability: Scalability is the ability of a Web application to maintain or improve performance as the user load increases. Scalability also refers to the ability of an application to recognize performance benefits as server resources increase.
Throughput was already discussed, so let's explore the additional measures, availability and scalability, in more detail.

Assessing Availability

Of course, performance is not the only metric that matters for ASP.NET applications. Application availability is equally important and is defined as the percentage of time that an application is functional. Availability is in many ways a harder metric to quantify, compared to performance, because hardware issues factor into the equation more than software issues do.

The factors that affect application availability include the following:
  • Hardware: Web servers and database servers obviously have to remain running for the hosted Web application to stay available. Multiserver Web architectures are designed for fault tolerance, usually by providing redundancy and backup drives both for the application files and for the database files.
  • Load: Overtaxed systems are susceptible to failure if the user load exceeds what either the hardware or the software was designed to accommodate. Load can be anticipated through capacity planning and designed for at the software and the hardware levels.
  • Network latency: This factor refers to delays in the transmission of a request or a response on the network. Latency may result from congestion on the network. Alternatively, it may result from an inefficient network—one that requires too many jumps between the sender and the receiver. Network latency is controllable on a local area network (LAN) or a virtual private network (VPN), but it is out of your control over a public Internet connection.
  • Connection bandwidth: Application users on the public Internet may not have the same amount of connection bandwidth; for example, broadband users have "a lot," and dial-up users have "very little." It is hard to say much about this factor, given that it is typically out of the developer's control for a Web application on the public Internet. About the only good thing to say about this factor is that users tend to adjust their expectations in proportion to the type of connection they are using. In other words, dial-up users expect to wait longer, and cable modem users do not.
  • Software: Software issues typically affect the performance of an application, rather than its availability. However, code that causes an application to become unstable and crash is an important factor in application availability. In general, code that is thread-safe is stable and unlikely to crash. Thread-safe code is much easier to write with .NET because the managed runtime environment enforces both a common type system and a range of rules that promote thread safety. Keep in mind, though, that calling COM+ components from .NET code is potentially unstable because COM+ components execute outside of the managed execution environment.
The most common way to quantify availability is by uptime, which is the percentage of time that an application is responsive and functional. There is no typical acceptable uptime percentage for an application. The acceptable number is the one that all parties agree is reasonable and can be committed to in a legal contract. Companies typically like to see more than 99-percent uptime, excluding scheduled downtime.

On an annual basis, this number is not as unreasonable as it might appear to be. For example, 99-percent uptime translates to a whopping 88 hours of downtime per year (8,760 hours × 1% ≈ 88 hours), or slightly more than two standard 40-hour work weeks. In economic terms, a high-traffic e-commerce Web application can lose a lot of revenue in this time period, particularly if it falls around a heavy shopping period such as Christmas. Figure 1 illustrates a sampling of downtime in hours, based on percentage uptimes.

Figure 1: Application availability

Assessing Scalability

The third of the big three metrics, after throughput and availability, is scalability. Many people confuse this factor with performance. The two are related, but they are not the same. Scalability is the ability of the application to perform under ever-increasing loads. A Web application is considered non-scalable if its performance falls below established standards once the load starts to increase.

Scalability also has a lesser-known definition, which is of particular importance to developers. Namely, scalability is the ability of an application to take full advantage of a machine's processing power as the number of processors increases. By this definition, an application that performs well on a two-processor machine is considered non-scalable if a four-processor machine fails to improve application performance by any significant amount.

This definition speaks to a number of low-level issues, such as the ability of both the application and the machine to work effectively using multiple threads. The .NET Framework provides sophisticated thread management capabilities and makes it easier than before to write thread-safe code. However, you may not achieve scalability simply by using the Framework's out-of-the-box thread management abilities. The schematic chart shown in Figure 2 illustrates two applications deployed on machines with increasing numbers of processors (X-axis).

The Y-axis indicates the requests per second that the applications are able to process. In this example, the number of users, or the load, is assumed to remain the same. The chart illustrates that Application #1 experiences much smaller performance gains than Application #2 as the number of processors increases. This implies that Application #2 is more scalable than Application #1.

Even so, neither application experiences any performance improvements in moving from a four-processor to an eight-processor machine. Scalability is clearly a parameter that is relative rather than absolute. Application #2 is more scalable than Application #1 as long as the number of processors remains fewer than eight. Application #2 may perform better than Application #1 on an eight-processor machine; however, it is no more scalable than Application #1 at this number of processors.


Figure 2: Application scalability

The .NET Framework provides other out-of-the-box features that may enhance scalability as an application experiences increasing loads. For example, the ADO.NET managed data providers implement connection pooling for database connections, which at face value would appear to always be a good thing. This is not so if your application uses dynamic connection strings, where pooling may actually hinder performance for a specific connection even while helping performance on others.
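To make the pooling point concrete, here is a minimal sketch; the server and database names are illustrative. ADO.NET maintains one pool per distinct connection string, which is why dynamic connection strings fragment the pool:

using System.Data.SqlClient;

// Pooling is on by default; these keywords tune it for this connection string
string connStr = "Server=WEBDB01;Database=Shop;Integrated Security=SSPI;" +
                 "Min Pool Size=5;Max Pool Size=50";

using (SqlConnection conn = new SqlConnection(connStr))
{
    conn.Open();    // taken from the pool keyed on this exact string
}                   // Close/Dispose returns the connection to the pool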

So, although the .NET Framework provides the tools for enhancing scalability, you need the smarts of a developer to take full advantage of them. Keep in mind that scalability works in two dimensions: up and out. Traditional scalability actually refers to scaling up, meaning that your application must accommodate increasing load on a fixed set of servers.

In this model, the processing is distributed across the same set of servers regardless of whether the load is high or low. Scaling out refers to applications designed to run across a server farm, where multiple servers collaborate to share the burden of increasing load. In this model, when the application load is low, just a few of the available servers handle the processing.

As the load increases, additional servers take on the burden, which effectively increases the available capacity of the system. Thus, architects and developers have a two-fold challenge in designing for scalability: designing their applications to both scale up and scale out. ASP.NET facilitates this effort by making certain management features less dependent on specific servers.

For example, you can easily configure state management to run across several servers in a farm. In addition, XML-based configuration files make it easier to point applications to distributed resources. One final note on the topic of scalability: Throwing hardware at an application usually buys you performance gains, but this approach should complement other measures, not replace them. For example, doubling the memory on a Web server will certainly result in immediate performance gains for many kinds of Web applications, but this will do nothing to address bottlenecks that may exist at the processor level or at the database level.

The database server is, after all, an equally important partner to the Web server in terms of its influence on scalability. Similarly, scaling out with additional Web servers will buy you perceived performance gains because more servers are now available to share the processing load. But, again, this approach will not address processor-level bottlenecks. Worse yet, if your application is experiencing memory leaks, then by scaling out additional servers you have essentially increased your problem by transferring an existing issue from a small number of servers to a larger number of servers.

Hardware considerations are an important aspect of designing a high-performance Web application. The hardware configuration is critical for maintaining high reliability and availability for an application. Basically, do not focus on hardware considerations at the expense of application-level issues, because long-term scalability will not benefit even if short-term scalability does.

Quoted.

Thursday, January 15, 2009

Profiling ASP.NET Application Performance

Performance profiling is a complicated process that requires developers to run through repeated iterations of profiling and optimization. There are a number of factors affecting performance that operate independently and so must be tackled independently, often by different groups of people.

Developers must tackle application coding issues by identifying the offending code blocks and then rewriting or optimizing them. System administrators must tackle server resource problems by examining the full set of tasks that the server is handling. Performance profiling has to be a cooperative task between different groups of people because at the end of the day, a user who experiences slow performance will be unhappy with the experience, regardless of whose piece of the puzzle is causing the issue. Performance profiling affects everyone on the technical team, and so it must engage everyone as well.

Performance profiling is a time-dependent activity because application performance changes over time. Performance may change periodically throughout the day or as the load on the application fluctuates. Alternatively, application performance may experience degradation over a long period of time, particularly as the database grows. Performance profiling must begin with a baseline, which is a set of metrics that define the performance at a specific time, under a specific set of conditions.

The baseline is the measure against which future performance will be compared. A baseline may also be referred to as a benchmark. A baseline includes a description of both hardware and software, and it typically includes a range of performance numbers that were derived without changing the hardware configuration.

This is an example of a baseline description:
  • The Web application runs on a single, dual-processor 400MHz server with 512MB of RAM. The application was subjected to an increasing load of 10 to 100 concurrent users with five threads. The application serves between 18 and 45 requests per second, diminishing with an increasing number of concurrent users. Response times for 10 concurrent users varied between 0.8 and 3.5 seconds, depending on the page. The Home page, which is cached, responds in 0.8 seconds, and more intensive screens take longer to deliver to the client.
Figure 1 shows the baseline application's throughput, and Figure 2 shows its response time.

Figure 1: Baseline application performance—throughput

Figure 2: Baseline application performance—response time

These graphs focus on just two metrics—namely, throughput and response time. A complete baseline should contain data for several metrics but should always include basic metrics such as throughput and response time because these most directly affect the user's experience with the application.

Quoted.

Tuesday, January 13, 2009

Performance Tuning and Optimization for ASP.NET Applications Series

I am going to post a series of articles that explain how you can tune and optimize your web application.
The purpose of this article series is to provide you with a primer on the concepts and terminology of application performance and optimization.

For many of you, these are concepts that are familiar, but with which you are not necessarily comfortable. Application performance issues are often paid little attention in smaller Web applications with low hit counts. However, these issues start to feel more important as your application starts experiencing higher hit counts and heavier loads. It is an unfortunate reality that application performance behaves in a nonlinear way.

To borrow a phrase from the stock market, this means that current (application) performance is not an indicator of future results. Loosely speaking, performance refers to an application's ability to service user requests under varying load conditions. Performance is measured by multiple kinds of metrics, including throughput and response time, to name a few. Performance benchmarks describe the goal you are trying to achieve. Important benchmark indicators include scalability and availability.

Optimization refers to fine-tuning an application for performance, based on your goals and the expected load on the application. Optimization is generally an iterative process where you apply your knowledge of the technology to address bottlenecks in performance.

The iterative aspect of the process comes about through a cycle of testing and tweaking until you have achieved your performance goals for the application. We cover all of these topics in detail, both later in this chapter and throughout this book.

Application performance issues are often ignored until the last minute, precisely when the application may already be experiencing heavy loads. Applications should be designed for optimum performance as a forethought, not as an afterthought. Optimization, by its nature, is a process that applies to a completed application. It is also an ongoing process that does not stop once the application is in production. So, there are different aspects of designing for performance.

Quoted.

Sunday, January 11, 2009

Understanding the ASP.NET Architecture Part - 4

View State


View state is an ASP.NET feature that allows a Web page to retain all of its state values between requests to the Web server. In classic ASP, developers are forced to handle this task manually, which is a tedious and time-consuming process.

Consider one example, where the user fills out a form with several HTML input text controls.
Once the form is submitted to the server, the ASP engine retrieves the control contents from the HTTP request and processes the business logic.

When the page returns to the client, the control values (including user input) are not retained, unless the developer has manually rehydrated the control values using embedded server-side code within the HTML form.

This issue is a regular headache for form-based applications because the server commonly needs to return control values to the client for a second look—for example, if the user input fails server-side validation.
Input controls represent one of the simpler problems.

Drop-down list controls pose a more complicated problem because they contain multiple values and include a user-selected value. Typically, drop-down lists need to be hydrated just once, and they must then retain the same values and user selections.

Classic ASP requires you to manually repopulate the select box contents between postings to the server.
This requirement is not just tedious for the developer, but it can also be expensive for the server, especially if additional database lookups are required.

View state enables all controls on a Web page to retain their state values between successive postbacks to the server. Furthermore, view state preserves a control's properties between postbacks. For example, if a control's Visible property is set to "False," then the control will remain invisible between successive postbacks.

In short, view state saves developers time by reducing the amount of coding they have to do. In addition, view state improves application performance by eliminating the database calls that would otherwise be needed to rehydrate controls between postbacks.

How View State Works


View state is stored using encoded key-value pairs within a hidden form field. Every control on the page is represented using one or more pairs. For view state to work, a Web page must contain a server-side form (with the runat="server" attribute) so that a hidden __VIEWSTATE field may be added directly below the opening <form> tag.

(A server-side form is provided by default when you create a new Web page.) View state only works for server-side controls contained within the server-side form. In fact, server-side controls will generate errors at runtime if they are added to a page outside of the server-side form.

For a Web page with view state enabled, the view state process works as follows:
  1. The client requests a Web page.
  2. The server renders the page, including a __VIEWSTATE hidden field that contains encoded key-value pairs for the properties and values of every control on the page.
  3. The client enters information into the rendered HTML controls and then posts the page back to the server.
  4. The server initializes the page and then tracks the client's changes to control state. The server executes the business logic on the page and then renders the page. The server controls are rehydrated using the latest values stored in the __VIEWSTATE hidden field, which is updated to include the new, client-specified values and selections.
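For reference, the rendered hidden field looks like this (the value shown here is truncated; real values are much longer):
<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUKLTE2MjY5..." />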
You manage view state using the StateBag class, which is a member of the System.Web.UI namespace.
The Page object provides access to a StateBag object through its ViewState property.

The StateBag class implements a number of interfaces, including the IDictionary and IEnumerable interfaces, which allow you to enumerate and modify the view state contents. Table 3 describes important members of the StateBag class.

Table 3: The StateBag Class Members

Add (method): Adds a new StateItem object to the StateBag object, or updates the value of an existing StateItem object if it is already included. The StateItem object represents a view state name-value pair.
Remove (method): Removes an object from the StateBag object.
Keys (property): Gets a collection of (enumerable) keys that represent the items in the StateBag object.
Item (property): Gets or sets the value of an object in the StateBag object.
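Because the StateBag implements IEnumerable, you can walk its contents from within a page; here is a small diagnostic sketch:

// Dump every view state entry to the response (for diagnostics)
foreach (string key in ViewState.Keys)
{
    Response.Write(key + " = " + ViewState[key] + "<br/>");
}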

You can add or modify view state items from the Web form code-behind file up until the PreRender stage of the page's lifecycle, which is just before the page starts rendering HTML. For example, you can append custom view state values that are distinct from any of the page controls:
protected override void OnPreRender(EventArgs e)
{
 base.OnPreRender(e);

 // Store custom ViewState values (in C#, the Item property is the indexer)
 ViewState["Key1"] = "MyValue1";
 ViewState.Add("Key2", "MyValue2");  // Alternate notation
}

In fact, you can add any object to view state, as long as it supports serialization. Serialization is the process by which an object is converted into a stream of bytes so that it can be saved to a storage medium. Serializable objects, such as the DataView object, implement the ISerializable interface.

Non-serializable objects, such as the ListItem object, do not implement this interface. As you can see, you can add serializable objects to view state easily:
// ViewState will store any serializable object
DataView dv = new DataView();
ViewState.Add("Mydv", dv);
Retrieving objects from view state is just as simple:
if (Page.IsPostBack)
{
 // Retrieve the DataView object stored in ViewState on an earlier request
 DataView sqlDV = (DataView)ViewState["Mydv"];
}

Persisting View State Across Multiple Pages


In special cases, view state may persist values across multiple pages, not just across postbacks for the same page. Consider an application with two Web pages, where each page contains a TextBox server control named TextBox1.

If Page 1 submits to Page 2, then the TextBox control on Page 2 will pick up the view state for the Page 1 TextBox control. You can use this behavior to your advantage if you want to persist common information across multiple pages, as long as the Web application uses posting between pages, rather than hyperlinks.

For example, consider an application where every page contains a Label server control for persisting the client's login name. Page 1's design view simply contains this Label control (figure not shown).

In the code-behind file, you would assign the login name to the Label control:
public class ViewState : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.Label lblUserName;

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
            // Assign Username to label (Username is hardcoded for demo purposes)
            lblUserName.Text = "DemoUser";
        }
    }
}
Then on subsequent Pages 2, 3, and so forth, you can automatically pick up the Label value from view state by doing just two things:
  1. Add a Label server control named "lblUserName".
  2. Add a variable declaration for the label in the code-behind file for the server control.
The only "coding" you need to do is to ensure that every page declares the server control variable:
protected System.Web.UI.WebControls.Label lblUserName;
The inverse behavior also applies: The view state will clear for all server controls that are not repeated between pages. Custom view state values will only persist for as long as the client continues to post back the same page.

Disabling View State


ASP.NET server controls have their view state enabled by default, but you can disable it for those controls that do not need to retain their values between postbacks. For example, DataGrid controls that are rehydrated on every postback do not need to retain their view state.

You can disable view state for a server control in two ways:
  • At design time, by setting its EnableViewState attribute to "False" in the control markup, for example:
<asp:DataGrid ID="MyGrid" runat="server" EnableViewState="False" />
  • At runtime, although make sure you do so the first time the page loads:
lblUserName.EnableViewState = false;
You can also disable view state for an entire page using the @ Page directive:
<%@ Page Language="C#" EnableViewState="false" CodeFile="Default.aspx.cs" Inherits="_Default" %>
Finally, you can disable view state for an entire application using the Web.config configuration file. To do so, set the enableViewState attribute on the pages configuration element:
<pages enableViewState="false" />
Keep in mind that when view state is completely disabled, you will have to do a lot of work to manage state manually.

Quoted.