Deliver on Software as a Service.
Support ECMA standards for C# and CLI.
Extend all existing software transparently.
Access databases easily with ADO.NET.
Reduce cost of ownership around application deployment.
Protect investments with advanced security.
Use next-generation data access designed for Web-scale applications.
Deliver on Software as a Service
The .NET Framework was designed for delivering software as a service, so it is built on XML and the SOAP family of integration standards. Simply annotate method calls, and the .NET Framework turns them into full XML Web services. SOAP itself is being standardized by the W3C.
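As a sketch of how little code this annotation takes, the hypothetical ASMX service below exposes one method as an XML Web service; the class name, method, and return value are all illustrative:

```csharp
// Hypothetical ASMX Web service; compiled as part of an ASP.NET application.
using System.Web.Services;

public class PriceService : WebService
{
    // The [WebMethod] attribute is the annotation that turns an
    // ordinary method into an XML Web service callable over SOAP.
    [WebMethod]
    public double GetPrice(string symbol)
    {
        return 42.0; // placeholder value for the sketch
    }
}
```

The framework generates the SOAP plumbing and the WSDL description of the service; the developer writes only the method body.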
Supporting ECMA Standards for C# and CLI
Standards are core to delivering software as a service. Consequently, Microsoft has submitted the specifications for the C# programming language and a subset of the .NET Framework called the Common Language Infrastructure (CLI) to ECMA, which is standardizing them. These specifications are a cooperative effort between Microsoft and six other ECMA partners, including Hewlett-Packard and Intel. The standardization process is currently about halfway completed.
Device Support for Broad Reach
The Microsoft .NET Compact Framework is a version of the .NET Framework for rapidly building, securely deploying, and running distributed XML Web services and applications on smart devices such as cellular telephones, enhanced televisions, and PDAs. It provides a highly productive, standards-based, multilanguage environment for integrating existing investments with next-generation applications and services, as well as the agility to meet the deployment and operational challenges of Internet-scale applications. The .NET Compact Framework consists of three main parts: the common language runtime, a hierarchical set of unified class libraries, and a set of profiles for smart device categories.
Extend All Existing Software Transparently
The .NET Framework is designed to integrate with your existing software, enabling you to take advantage of all of your existing development investments without replacing them. For example, built-in interoperability lets your existing COM components act as .NET Framework components, and any .NET Framework component you create can in turn be exposed as a COM component.
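As a sketch of the .NET-to-COM direction, a class can be made visible to COM clients with a single attribute; the class below is hypothetical, and registration with a tool such as regasm.exe is assumed:

```csharp
using System.Runtime.InteropServices;

// Marking the class ComVisible lets COM clients create and call it
// through the runtime's COM callable wrapper.
[ComVisible(true)]
public class TaxCalculator
{
    public double AddTax(double amount)
    {
        return amount * 1.08; // illustrative 8 percent rate
    }
}
```

The reverse direction works similarly: an interop assembly generated from a COM type library lets managed code call the COM component as if it were an ordinary .NET class.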
Access Databases Easily with Microsoft ADO.NET
Nearly all applications need to query or update persisted data, whether in simple files, relational databases, or any other type of store. To fulfill this need, the .NET Framework includes ADO.NET, a data access subsystem optimized for n-tier environments. For easy interoperability, ADO.NET uses XML as its native data format. As the name implies, ADO.NET evolved from ADO (ActiveX Data Objects), and it builds on the large library of OLE DB providers already available.
Designed for loosely coupled environments, ADO.NET provides data access services for scalable Web-based applications and services. It offers high-performance stream APIs for the traditional connected data model as well as a disconnected model that is better suited to returning data to Web applications.
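A minimal sketch of the connected, stream-style model using a data reader; the connection string, database, and query are illustrative:

```csharp
using System;
using System.Data.SqlClient;

class ConnectedRead
{
    static void Main()
    {
        // The connection stays open only for the duration of the read.
        using (SqlConnection conn = new SqlConnection(
            "Server=(local);Database=pubs;Integrated Security=SSPI"))
        {
            SqlCommand cmd = new SqlCommand("SELECT au_lname FROM authors", conn);
            conn.Open();

            // A data reader streams rows forward-only, one at a time.
            SqlDataReader reader = cmd.ExecuteReader();
            while (reader.Read())
            {
                Console.WriteLine(reader.GetString(0));
            }
            reader.Close();
        }
    }
}
```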
As you develop applications, you will have different requirements for working with data. In some cases, you might simply want to display data on a form. In other cases, you might need to devise a way to share information with another company.
No matter what you do with data, you need to understand certain fundamental concepts about the data approach in the .NET Framework. You might never need to know some of the details of data handling—for example, you might never need to directly edit an XML file containing data—but it is very useful to understand the data architecture, what the major data components are, and how the pieces fit together.
Disconnected Data Architecture
In traditional two-tier applications, components establish a connection to a database and keep it open while the application is running. For a variety of reasons, this approach is impractical in many applications:
Open database connections take up valuable system resources. The overhead of maintaining these connections impacts overall application performance.
Applications that require an open database connection are extremely difficult to scale. An application that might perform acceptably with four users will likely not do so with a hundred. Web applications in particular need to be easily scalable, because traffic to a Web site can go up by orders of magnitude in a very short period.
In Web applications, the components are inherently disconnected from each other. The browser requests a page from the server; when the server has finished processing and sending the page, it has no further connection with the browser until the next request. Under these circumstances, maintaining an open connection to a database is not viable, because there is no way to know whether the data consumer (the client) requires further data access.
A model based on connected data can make it difficult to share data between components, especially components in different applications. If two components need to share the same data, both components have to be connected, or a way must be devised for the components to pass data back and forth.
For all these reasons, data access in ADO.NET is designed around a disconnected architecture. Applications are connected to the database only long enough to fetch or update the data. Because the database is not hanging on to connections that are largely idle, it can service many more users.
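The pattern described above can be sketched with a data adapter; the connection string and query are illustrative:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class DisconnectedFetch
{
    static void Main()
    {
        DataSet ds = new DataSet();
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT au_id, au_lname FROM authors",
            "Server=(local);Database=pubs;Integrated Security=SSPI");

        // Fill opens the connection, runs the query, copies the results
        // into the dataset, and closes the connection again; nothing is
        // held open while the application works with the data.
        adapter.Fill(ds, "Authors");

        foreach (DataRow row in ds.Tables["Authors"].Rows)
        {
            Console.WriteLine(row["au_lname"]);
        }
    }
}
```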
Data Is Cached in Datasets
Far and away the most common data task is to retrieve data from the database and do something with it: display it, process it, or send it to another component. Very frequently, the application needs to process not just one record, but a set of them: a list of customers or today's orders, for example. Often the set of records that the application requires comes from more than one table: my customers and all their orders; all authors named "Smith" and the books they’ve written; and other, similar, sets of related records.
Once these records are fetched, the application typically works with them as a group. For example, the application might allow the user to browse through all the authors named "Smith" and examine the books for one Smith, then move to the next Smith, and so on.
In a disconnected data model, it's impractical to go back to the database each time the application needs to process the next record. (Doing so would undo much of the advantage of working with disconnected data in the first place.) The solution, therefore, is to temporarily store the records retrieved from the database and work with this temporary set.
This temporary dataset is a cache of records retrieved from the database. It works like a virtual data store—it includes one or more tables based on the tables in the actual database (or databases), and it can include information about the relationships between those tables and constraints on what data the tables can contain.
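A sketch of a dataset acting as such a virtual data store, with two tables and a relationship between them; all table and column names are illustrative:

```csharp
using System;
using System.Data;

class DatasetStructure
{
    static void Main()
    {
        DataSet ds = new DataSet("Library");

        DataTable authors = ds.Tables.Add("Authors");
        authors.Columns.Add("AuthorID", typeof(int));
        authors.Columns.Add("LastName", typeof(string));

        DataTable titles = ds.Tables.Add("Titles");
        titles.Columns.Add("AuthorID", typeof(int));
        titles.Columns.Add("Title", typeof(string));

        // Relationships and constraints are part of the dataset itself,
        // mirroring the structure of the underlying database tables.
        ds.Relations.Add("AuthorTitles",
            authors.Columns["AuthorID"], titles.Columns["AuthorID"]);
    }
}
```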
The data in the dataset is usually a much-reduced version of what's in the database. However, you can work with it in much the same way you work with the real data. While you are doing so, you remain disconnected from the database, which frees it to perform other tasks.
Because you often need to update data in the database (although not nearly as often as you retrieve data from it), you can perform update operations on the dataset, and these are written through to the underlying database.
An important point is that the dataset is a passive container for data. To actually fetch data from a database and (optionally) write it back, you use a data adapter. A data adapter contains the instructions for how to populate a single table in the dataset and how to update the corresponding table in the database. The instructions are methods that encapsulate Structured Query Language (SQL) statements, such as a reference to a stored procedure. For example, the adapter's Fill method might execute a SQL statement such as SELECT au_id, au_lname, au_fname FROM authors each time it is called.
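A sketch of the fetch-and-write-back cycle; the command builder shown here, which derives the update statements from the SELECT, is one common way to supply the adapter's update instructions (connection details are illustrative):

```csharp
using System.Data;
using System.Data.SqlClient;

class WriteThrough
{
    static void Main()
    {
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT au_id, au_lname, au_fname FROM authors",
            "Server=(local);Database=pubs;Integrated Security=SSPI");

        // The command builder derives INSERT, UPDATE, and DELETE
        // statements from the adapter's SELECT statement.
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Authors");

        // Edit the cached copy while disconnected...
        ds.Tables["Authors"].Rows[0]["au_lname"] = "Smith";

        // ...then reconnect just long enough to write the change through.
        adapter.Update(ds, "Authors");
    }
}
```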
Data Is Persisted as XML
Data needs to be moved from the data store to the dataset, and from there to various components. In ADO.NET, the format for remoting data is XML.
When data needs to be persisted outside of the database (for example, into a file), it is stored as XML. If you have an XML file available, you can use it like any data source and create a dataset out of it.
In fact, in ADO.NET, XML is the fundamental format for sharing data. When you share data, the ADO.NET APIs automatically create XML files or streams out of information in the dataset and send them to another component. The second component can invoke similar APIs to read the XML back into a dataset.
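Because XML is the persistence format, a dataset can be written to a file and read back with two calls; everything below is illustrative and runs without a database:

```csharp
using System;
using System.Data;

class XmlRoundTrip
{
    static void Main()
    {
        // Build a small dataset in memory; names and values are illustrative.
        DataSet source = new DataSet("Catalog");
        DataTable books = source.Tables.Add("Books");
        books.Columns.Add("Title", typeof(string));
        books.Rows.Add(new object[] { "An Example Title" });

        // Persist the dataset as XML...
        source.WriteXml("catalog.xml");

        // ...and reconstruct an equivalent dataset from the file.
        DataSet copy = new DataSet();
        copy.ReadXml("catalog.xml");
        Console.WriteLine(copy.Tables["Books"].Rows.Count); // prints 1
    }
}
```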
Why XML? There are several reasons:
XML is an industry-standard format. This means that your application data components can exchange data with any other component in any other application, as long as that component understands XML. Many applications are being written to understand XML, which provides an unprecedented level of exchange between disparate applications.
XML is text-based. The XML representation of data uses no binary information, which enables XML to be sent through any protocol, such as HTTP. Most firewalls block binary information, but by formatting information in XML, components can still easily exchange the information.
XML supports schemas. ADO.NET makes it easy to create custom XML documents through the use of XSD schemas, which describe the structure of your data so that other components can validate and interpret it.
Do you need to know XML in order to share data in ADO.NET? No. ADO.NET automatically converts data into and out of XML as needed; you interact with the data using ordinary programming methods.
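For instance, a dataset can emit the XSD schema that describes its own structure, without the developer writing any XML by hand; the names here are illustrative:

```csharp
using System.Data;

class SchemaExport
{
    static void Main()
    {
        DataSet ds = new DataSet("Orders");
        DataTable orders = ds.Tables.Add("Order");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("Total", typeof(decimal));

        // Emit an XSD schema describing the dataset's structure, which
        // other components can use to validate and interpret the XML.
        ds.WriteXmlSchema("orders.xsd");
    }
}
```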