Sunday, April 24, 2011

Customers often ask whether, if their server goes down, they will be switched to another server so that their websites and applications stay online. A layman customer will usually expect the host to provide automatic failover, without being aware of the cost such a solution involves. However, a host can still offer a simple failover setup at a fraction of that cost. Here is a simple way to build a Windows failover solution, assuming the server runs websites on IIS backed by MS SQL Server databases.

1. IIS Mirroring

Here is a URL explaining how to set up IIS mirroring:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/81f04967-f02f-4845-9795-bad2fe1a1687.mspx?mfr=true

2. Data Mirroring

For data mirroring, you can use Robocopy, which ships with the Windows 2003 Resource Kit and is built into Windows 2008. Create a batch file on the primary server that mirrors the data to the backup server using Robocopy's /MIR switch, which keeps the destination an exact copy of the source: if a file or directory is deleted from the source server, it is removed from the backup server too. The first run performs a full copy; every subsequent run is incremental, copying only new and changed data. You can therefore schedule the batch file to run every few minutes, say every 15 minutes. For a shared hosting server with a few hundred websites, however, you may have to increase the interval.
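As a rough sketch, the mirroring step can be wrapped in a small script. The paths and log location below are hypothetical, and the exact Robocopy switches you need may differ; /MIR, /R, /W and /LOG+ are real Robocopy options, but treat the whole thing as an illustration rather than a ready-made job:

```python
import subprocess

def build_mirror_command(source, destination):
    """Build a Robocopy command that mirrors source to destination.

    /MIR   mirror the directory tree (also deletes files removed from source)
    /R:2   retry a failed copy twice
    /W:5   wait 5 seconds between retries
    /LOG+  append output to a log file for later review (path is hypothetical)
    """
    return [
        "robocopy", source, destination,
        "/MIR", "/R:2", "/W:5", r"/LOG+:C:\logs\mirror.log",
    ]

def run_mirror(source, destination):
    # Robocopy exit codes 0-7 indicate success; 8 and above mean errors.
    result = subprocess.run(build_mirror_command(source, destination))
    return result.returncode < 8
```

On the real server the equivalent command line would simply live in the scheduled batch file; the wrapper above only makes the exit-code handling explicit.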

3. MS SQL Mirroring

MS SQL Server has a built-in option for database mirroring. You can visit http://sqlserver-training.com/how-to-perform-sql-server-mirroring-manual-failover to set up database mirroring.

4. DNS Failover

A few companies, such as DNSMadeEasy, provide DNS failover. You can buy a failover package from one of them so that traffic is redirected to the backup server if the primary server goes down.
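Conceptually, a DNS failover service just probes the primary server and changes the DNS answer when the probe fails. A minimal sketch of that decision logic, with the probe and addresses purely illustrative:

```python
import urllib.request

def is_up(url, timeout=5):
    """Probe a server over HTTP; any response at all counts as 'up'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except Exception:
        return False

def choose_active(primary, backup, probe):
    """Return the address DNS should resolve to: the primary while it
    answers the probe, otherwise the backup."""
    return primary if probe(primary) else backup
```

A real provider runs probes like this from multiple locations and updates the DNS record with a short TTL; the function above only captures the switch-over decision.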

The above method certainly saves money, though you still need to deploy an additional server to achieve it. Compared to dedicated mirroring and failover hardware, however, the cost is very low. After all, you cannot have everything for free.

Tuesday, September 7, 2010

Advantages of .NET 4.0 Framework

The new features and improvements are described in the following sections:

Programming Languages
Common Language Runtime (CLR)
Base Class Libraries
Networking
Web
Client
Data
Communications
Workflow


The .NET Framework 4.0 introduces a new programming model for writing multithreaded and asynchronous code that greatly simplifies the work of application and library developers. The new model enables developers to write efficient, fine-grained, and scalable parallel code in a natural idiom without having to work directly with threads or the thread pool. The new Parallel and Task classes, and other related types, support this new model. Parallel LINQ (PLINQ), which is a parallel implementation of LINQ to Objects, enables similar functionality through declarative syntax. For more information, see Parallel Programming in the .NET Framework.
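The same idea, writing parallel code without touching threads or the thread pool directly, can be illustrated in Python with `concurrent.futures` (a loose analogue of the Parallel/Task model, not the .NET API itself):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(func, items, workers=4):
    """Apply func to items on a pool of worker threads, preserving input
    order, without creating or joining any thread by hand."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))

# The caller expresses *what* to compute; the pool decides *how*.
squares = parallel_map(lambda n: n * n, range(8))
```

PLINQ goes one step further by making the parallelism declarative inside a query; the closest Python gets is passing the work as a function to a pool, as above.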

Performance and Diagnostics

In addition to the features described below, the .NET Framework 4.0 provides improvements in startup time, smaller working sets, and faster performance for multithreaded applications.

ETW Events

You can now access Event Tracing for Windows (ETW) events for diagnostic purposes to improve performance.

Performance Monitor (Perfmon.exe) can now distinguish between multiple applications that use the same name, and between multiple versions of the common language runtime loaded into a single process. This requires a simple registry modification. For more information, see Performance Counters and In-Process Side-By-Side Applications.

Code Contracts

Code contracts let you specify contractual information that is not represented by a method's or type's signature alone. The new System.Diagnostics.Contracts namespace contains classes that provide a language-neutral way to express coding assumptions in the form of preconditions, postconditions, and object invariants. The contracts improve testing through run-time checking, and they also enable static contract verification and documentation generation.

The applicable scenarios include the following:

  • Perform static bug finding, which enables some bugs to be found without executing the code.
  • Create guidance for automated testing tools to enhance test coverage.
  • Create a standard notation for code behavior, which provides more information for documentation.
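A precondition in the Contract.Requires style can be sketched in Python with a decorator. This is an analogue of the idea, not the System.Diagnostics.Contracts API; the function and message below are invented for illustration:

```python
import functools

def requires(predicate, message="precondition failed"):
    """Attach a run-time-checked precondition to a function, in the
    spirit of Contract.Requires: callers that violate it fail fast."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError(message)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@requires(lambda n: n >= 0, "n must be non-negative")
def square_root_floor(n):
    # The body can assume the contract holds.
    return int(n ** 0.5)
```

A static checker (the second scenario above) would read the same predicate at analysis time instead of waiting for a run-time failure.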

Lazy Initialization

With lazy initialization, the memory for an object is not allocated until it is needed. Lazy initialization can improve performance by spreading object allocations evenly across the lifetime of a program. You can enable lazy initialization for any custom type by wrapping the type inside a System.Lazy<T> class.
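The Lazy<T> pattern is easy to see in a few lines of Python. This is a minimal analogue (no thread-safety, unlike the real class): the factory runs only on first access and the result is cached afterwards.

```python
class Lazy:
    """Minimal analogue of System.Lazy<T>: defers creating the value
    until .value is first read, then caches the result."""
    def __init__(self, factory):
        self._factory = factory
        self._created = False
        self._value = None

    @property
    def value(self):
        if not self._created:
            self._value = self._factory()
            self._created = True
        return self._value

calls = []
lazy = Lazy(lambda: calls.append("created") or [0] * 1000)
# Nothing has been allocated yet; the factory runs on first access only.
```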

Dynamic Language Runtime

The dynamic language runtime (DLR) is a new runtime environment that adds a set of services for dynamic languages to the CLR. The DLR makes it easier to develop dynamic languages to run on the .NET Framework and to add dynamic features to statically typed languages. To support the DLR, the new System.Dynamic namespace is added to the .NET Framework. In addition, several new classes that support the .NET Framework infrastructure are added to the System.Runtime.CompilerServices namespace. For more information, see Dynamic Language Runtime Overview.

In-Process Side-by-Side Execution

In-process side-by-side hosting enables an application to load and activate multiple versions of the common language runtime (CLR) in the same process. For example, you can run applications that are based on the .NET Framework 2.0 SP1 and applications that are based on .NET Framework 4.0 in the same process. Older components continue to use the same CLR version, and new components use the new CLR version. For more information, see Hosting Changes in the .NET Framework 4.

Interoperability

New interoperability features and improvements include the following:

  • You no longer have to use primary interop assemblies (PIAs). Compilers embed the parts of the interop assemblies that the add-ins actually use, and type safety is ensured by the common language runtime.
  • You can use the System.Runtime.InteropServices.ICustomQueryInterface interface to create a customized, managed code implementation of the IUnknown::QueryInterface method. Applications can use the customized implementation to return a specific interface (except IUnknown) for a particular interface ID.

Profiling

In the .NET Framework 4.0, you can attach profilers to a running process at any point, perform the requested profiling tasks, and then detach. For more information, see the IClrProfiling::AttachProfiler method.

Garbage Collection

The .NET Framework 4.0 provides background garbage collection; for more information, see the entry So, what's new in the CLR 4.0 GC? in the CLR Garbage Collector blog. 

Covariance and Contravariance

Several generic interfaces and delegates now support covariance and contravariance. For more information, see Covariance and Contravariance in the Common Language Runtime.

Base Class Libraries

The following sections describe new features in collections and data structures, exception handling, I/O, reflection, threading, and the Windows registry.

Collections and Data Structures

Enhancements in this area include the new System.Numerics.BigInteger structure, the System.Collections.Generic.SortedSet<T> generic class, and tuples.

BigInteger

The new System.Numerics.BigInteger structure is an arbitrary-precision integer data type that supports all the standard integer operations, including bit manipulation. It can be used from any .NET Framework language. In addition, some of the new .NET Framework languages (such as F# and IronPython) have built-in support for this structure.
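Python's built-in `int` is already arbitrary precision, so it makes a convenient stand-in for showing what BigInteger buys you, values that overflow any fixed-width integer, with bit manipulation intact:

```python
# 50! overflows 64-bit integers by a wide margin, but arbitrary-precision
# integers handle it exactly.
factorial_50 = 1
for i in range(1, 51):
    factorial_50 *= i

# Bit manipulation works on arbitrarily large values too.
big = 1 << 200        # 2**200, far beyond any fixed-width integer type
```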

SortedSet Generic Class

The new System.Collections.Generic.SortedSet<T> class provides a self-balancing tree that maintains data in sorted order after insertions, deletions, and searches. This class implements the new System.Collections.Generic.ISet<T> interface.

The System.Collections.Generic.HashSet<T> class also implements the ISet<T> interface.

Tuples

A tuple is a simple generic data structure that holds an ordered set of items of heterogeneous types. Tuples are supported natively in languages such as F# and IronPython, but are also easy to use from any .NET Framework language such as C# and Visual Basic. The .NET Framework 4.0 adds eight new generic tuple classes, and also a Tuple class that contains static factory methods for creating tuples.
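Since tuples are native in Python, the idiom is easy to demonstrate there: grouping heterogeneous results without declaring a class, much as Tuple.Create does in the .NET Framework 4.0 (the helper function below is invented for the example):

```python
def min_max(values):
    """Return two related results at once as a tuple, instead of
    defining a one-off class to carry them."""
    return (min(values), max(values))

low, high = min_max([3, 1, 4, 1, 5])   # tuple unpacking at the call site
```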

Exception Handling

The .NET Framework 4.0 class library contains the new System.Runtime.ExceptionServices namespace, and adds the ability to handle corrupted state exceptions. 

Corrupted State Exceptions

The CLR no longer delivers corrupted state exceptions that occur in the operating system to be handled by managed code, unless you apply the HandleProcessCorruptedStateExceptionsAttribute attribute to the method that handles the corrupted state exception.

Alternatively, you can add the following setting to an application's configuration file:

<legacyCorruptedStateExceptionsPolicy enabled="true" />

I/O

The key new features in I/O are efficient file enumerations, memory-mapped files, and improvements in isolated storage and compression.

File System Enumeration Improvements

New enumeration methods in the Directory and DirectoryInfo classes return IEnumerable<T> collections instead of arrays. These methods are more efficient than the array-based methods, because they do not have to allocate a (potentially large) array, and you can access the first results immediately instead of waiting for the complete enumeration to occur.

There are also new methods in the static File class that read and write lines from files by using IEnumerable<T> collections. These methods are useful in LINQ scenarios where you may want to quickly and efficiently query the contents of a text file and write out the results to a log file without allocating any arrays.
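The streaming-versus-array distinction maps directly onto Python generators. A sketch of the Directory.EnumerateFiles idea: the first result is available immediately and memory use stays flat no matter how large the tree is.

```python
import os

def enumerate_files(root):
    """Yield file paths one at a time instead of building a full array,
    analogous to Directory.EnumerateFiles rather than GetFiles."""
    for entry in os.scandir(root):
        if entry.is_file():
            yield entry.path
        elif entry.is_dir(follow_symlinks=False):
            yield from enumerate_files(entry.path)
```

Because the function is a generator, a caller can stop after the first match (`next(enumerate_files(root))`) without paying for the rest of the walk.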

Memory-Mapped Files

The new System.IO.MemoryMappedFiles namespace provides memory-mapping functionality, which is available in Windows. You can use memory-mapped files to edit very large files and to create shared memory for inter-process communication. The new System.IO.UnmanagedMemoryAccessor class enables random access to unmanaged memory, similar to how System.IO.UnmanagedMemoryStream enables sequential access to unmanaged memory.
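Python's standard `mmap` module exposes the same operating-system facility, so the core idea, editing a file in place through memory rather than seek/read/write calls, can be shown in a few lines (file name and contents are invented for the demo):

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "mmap_demo.bin")

# Create a small file, then map it into memory and edit it in place.
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mapped:
        mapped[0:5] = b"HELLO"      # random access: a slice assignment,
                                    # no explicit seek/read/write dance

with open(path, "rb") as f:
    contents = f.read()
```

For very large files the payoff is that only the touched pages are brought into memory, and two processes mapping the same file share those pages.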

Isolated Storage Improvements

Partial-trust applications, such as Windows Presentation Foundation (WPF) browser applications (XBAPs) and ClickOnce partial-trust applications, now have the same capabilities in the .NET Framework as they do in Silverlight. The default quota size is doubled, and applications can prompt the user to approve or reject a request to increase the quota. The System.IO.IsolatedStorage.IsolatedStorageFile class contains new members to manage the quota and to make working with files and directories easier.

Compression Improvements

The compression algorithms for the System.IO.Compression.DeflateStream and System.IO.Compression.GZipStream classes have been improved so that data that is already compressed is no longer inflated. This results in much better compression ratios. Also, the 4-gigabyte size restriction for compressing streams has been removed.

Reflection

The .NET Framework 4.0 provides the capability to monitor the performance of your application domains.

Application Domain Resource Monitoring

Until now, there has been no way to determine whether a particular application domain is affecting other application domains, because the operating system APIs and tools, such as the Windows Task Manager, were precise only to the process level. Starting with the .NET Framework 4.0, you can get processor usage and memory usage estimates per application domain.

Application domain resource monitoring is available through the managed AppDomain class, native hosting APIs, and event tracing for Windows (ETW). When this feature has been enabled, it collects statistics on all application domains in the process for the life of the process.

For more information, see the <appDomainResourceMonitoring> element and the related monitoring properties of the AppDomain class.

64-bit View and Other Registry Improvements

Windows registry improvements include support for accessing the 64-bit view of the registry.

Threading

General threading improvements include the following:

  • The new Monitor.Enter(Object, ref Boolean) method overload takes a Boolean passed by reference and atomically sets it to true only if the monitor is successfully entered.
  • You can use the Thread.Yield method to have the calling thread yield execution to another thread that is ready to run on the current processor.

The following sections describe new threading features.

Unified Model for Cancellation

The .NET Framework 4.0 provides a new unified model for cancellation of asynchronous operations. The new System.Threading.CancellationTokenSource class is used to create a CancellationToken that may be passed to any number of operations on multiple threads. Calling Cancel() on the token source sets the token's IsCancellationRequested property to true and signals the token's wait handle, at which time any actions registered with the token are invoked. Any object that has a reference to the token can monitor the value of that property and respond as appropriate.
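The cooperative-cancellation pattern can be sketched in Python with `threading.Event` standing in for the token: the worker polls the shared flag and stops when it is set. This is an analogue of the idea, not the CancellationToken API:

```python
import threading
import time

def worker(cancel, results):
    """Cooperatively check the shared token and stop when asked,
    mirroring polling of IsCancellationRequested."""
    while not cancel.is_set():
        results.append("tick")
        time.sleep(0.01)

cancel = threading.Event()      # plays the CancellationTokenSource role
results = []
t = threading.Thread(target=worker, args=(cancel, results))
t.start()
time.sleep(0.05)
cancel.set()                    # one call cancels every listener of the token
t.join()
```

The key property carried over from the .NET design is that cancellation is a request, not a kill: the worker exits at a point of its own choosing, so shared state is never left half-updated.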

Thread-Safe Collection Classes

The new System.Collections.Concurrent namespace introduces several new thread-safe collection classes that provide lock-free access to items whenever useful, and fine-grained locking where locks are appropriate. Using these classes in multithreaded scenarios should improve performance over collection types such as ArrayList and List<T>.

Synchronization Primitives

New synchronization primitives in the System.Threading namespace enable fine-grained concurrency and faster performance by avoiding expensive locking mechanisms. The Barrier class enables multiple threads to work on an algorithm cooperatively by providing a point at which each task can signal its arrival and then block until the other participants in the barrier have arrived. The CountdownEvent class simplifies fork and join scenarios by providing an easy rendezvous mechanism. The ManualResetEventSlim class is a lock-free synchronization primitive similar to the ManualResetEvent class. ManualResetEventSlim is lighter weight but can only be used for intra-process communication. The SemaphoreSlim class is a lightweight synchronization primitive that limits the number of threads that can access a resource or a pool of resources at the same time; it can be used only for intra-process communication. The SpinLock class is a mutual exclusion lock primitive that causes the thread that is trying to acquire the lock to wait in a loop, or spin, until the lock becomes available. The SpinWait class is a small, lightweight type that will spin for a time and eventually put the thread into a wait state if the spin count is exceeded.
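Python's `threading` module happens to include direct counterparts for two of these primitives, `Barrier` and `Semaphore`, so the rendezvous and resource-limiting patterns can be shown concretely (worker names and phases below are invented for the demo):

```python
import threading

NUM_WORKERS = 3
barrier = threading.Barrier(NUM_WORKERS)   # rendezvous, like the Barrier class
gate = threading.Semaphore(2)              # like SemaphoreSlim: at most 2 inside
order = []
order_lock = threading.Lock()

def phase_worker(name):
    with gate:                  # limit concurrent access to the "resource"
        with order_lock:
            order.append((name, "phase-1"))
    barrier.wait()              # block until ALL workers finish phase 1
    with order_lock:
        order.append((name, "phase-2"))

threads = [threading.Thread(target=phase_worker, args=(i,))
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The barrier guarantees the property the paragraph describes: no participant starts the second phase until every participant has signaled arrival at the end of the first.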

Networking

Enhancements have been made that affect how integrated Windows authentication is handled by the HttpWebRequest, HttpListener, SmtpClient, SslStream, NegotiateStream, and related classes in the System.Net and related namespaces. Support was added for extended protection to enhance security. The changes to support extended protection are available only for applications on Windows 7; the extended protection features are not available on earlier versions of Windows. For more information, see Integrated Windows Authentication with Extended Protection.

Web

The following sections describe new features in ASP.NET core services, Web Forms, Dynamic Data, and Visual Web Developer.

ASP.NET Core Services

ASP.NET introduces several features that improve core ASP.NET services, Web Forms, Dynamic Data, and Visual Web Developer. For more information, see What's New in ASP.NET and Web Development.

ASP.NET Web Forms

Web Forms has been a core feature in ASP.NET since the release of ASP.NET 1.0. Many enhancements have been made in this area for ASP.NET 4, including the following:

  • The ability to set meta tags.
  • More control over view state.
  • Easier ways to work with browser capabilities.
  • Support for using ASP.NET routing with Web Forms.
  • More control over generated IDs.
  • The ability to persist selected rows in data controls.
  • More control over rendered HTML in the FormView and ListView controls.
  • Filtering support for data source controls.

Dynamic Data

For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. This includes the following:

  • Automatic validation that is based on constraints defined in the data model.
  • The ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project.

Visual Web Developer Enhancements

The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup code examples, and features a redesigned version of IntelliSense for JScript. In addition, two new deployment features called Web packaging and One-Click Publish make deploying Web applications easier.

Client

The following sections describe new features in Windows Presentation Foundation (WPF) and Managed Extensibility Framework (MEF).

Windows Presentation Foundation

In the .NET Framework 4.0, Windows Presentation Foundation (WPF) contains changes and improvements in many areas. This includes controls, graphics, and XAML.

For more information, see What's New in Windows Presentation Foundation Version 4.

Managed Extensibility Framework

The Managed Extensibility Framework (MEF) is a new library in the .NET Framework 4.0 that enables you to build extensible and composable applications. MEF enables application developers to specify points where an application can be extended, expose services to offer to other extensible applications, and create parts for consumption by extensible applications. It also enables easy discoverability of available parts based on metadata, without the need to load the assemblies for the parts.

For more information, see Managed Extensibility Framework. For a list of the MEF types, see the System.ComponentModel.Composition namespace.

Data

For more information, see What's New in ADO.NET.

Expression Trees

Expression trees are extended with new types that represent control flow, for example, LoopExpression and TryExpression. These new types are used by the dynamic language runtime (DLR) and not used by LINQ.

Communications

Windows Communication Foundation (WCF) provides the new features and enhancements described in the following sections.

Support for WS-Discovery

The Service Discovery feature enables client applications to dynamically discover service addresses at run time in an interoperable way using WS-Discovery. The WS-Discovery specification outlines the message-exchange patterns (MEPs) required for performing lightweight discovery of services, both by multicast (ad hoc) and unicast (using a network resource).

Standard Endpoints

Standard endpoints are pre-defined endpoints that have one or more of their properties (address, binding, contract) fixed. For example, all metadata exchange endpoints specify IMetadataExchange as their contract, so there is no need for a developer to have to specify the contract. Therefore, the standard MEX endpoint has a fixed IMetadataExchange contract.

Workflow Services

With the introduction of a set of messaging activities, it is easier than ever to implement workflows that send and receive data. These messaging activities enable you to model complex message exchange patterns that go beyond the traditional send/receive or RPC-style method invocation.

Workflow

Windows Workflow Foundation (WF) in .NET Framework 4.0 changes several development paradigms from earlier versions. Workflows are now easier to create, execute, and maintain.

Workflow Activity Model

The activity is now the base unit of creating a workflow, instead of using the SequentialWorkflowActivity or StateMachineWorkflowActivity classes. The WorkflowElement class provides the base abstraction of workflow behavior. Activity authors implement WorkflowElement objects imperatively when they have to use the breadth of the runtime. The Activity class is a data-driven WorkflowElement object where activity authors express new behaviors declaratively in terms of other activity objects.

Richer Composite Activity Options

The Flowchart class is a powerful new control flow activity that enables authors to construct process flows more naturally. Procedural workflows benefit from new flow-control activities that model traditional flow-control structures, such as TryCatch and Switch.

Expanded Built-in Activity Library

New features of the activity library include the following:

  • Data access activities for interacting with ODBC data sources.
  • New flow control activities such as DoWhile, ForEach, and ParallelForEach.
  • Activities for interacting with PowerShell and SharePoint.

Enhanced Persistence and Unloading

Workflow state data can be explicitly persisted by using the Persist activity. A host can persist a WorkflowInstance without unloading it. A workflow can specify no-persist zones when working with data that cannot be persisted so that persistence is postponed until the no-persist zone exits.

Improved Ability to Extend WF Designer Experience

The new WF Designer is built on Windows Presentation Foundation (WPF) and provides an easier model to use when rehosting the WF Designer outside Visual Studio. It also provides easier mechanisms for creating custom activity designers. For more information, see Extending the Workflow Designer.

Tuesday, May 25, 2010

1. Always analyze the problem first. Determine exactly what the problem is and what its effect is.
2. Find the root cause by checking the relevant log files or the Event Viewer.
3. If a specific error or warning appears at the time the problem occurred, find the details of that event. You can search Google effectively by using the event log entry as the search string.
4. If the event log entry is generic, there may be several possible causes. Try to find the solution relevant to your server or application.
5. Use tools such as ping, nslookup, dig, and telnet, depending on the nature of the issue.
6. If you cannot find a solution, post the issue in relevant discussion forums.
7. If vendor support is available, make full use of it. A solution via this route may take some time, but when one is not easily found otherwise, you can always contact the vendor to get the issue resolved.


Using the above method, you can resolve lots of issues easily.
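Step 5 can be partly automated. A small sketch of the first two checks you would run by hand, DNS resolution and a ping, with the host names purely illustrative (the ping flag shown is the Unix `-c`; Windows uses `-n`):

```python
import socket
import subprocess

def resolve(hostname):
    """Confirm DNS resolution, the nslookup/dig step, before digging deeper."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def can_ping(host, count=1):
    """Wrap the system ping tool; True when the host replies.
    Note: use '-n' instead of '-c' on Windows."""
    cmd = ["ping", "-c", str(count), host]
    return subprocess.run(cmd, capture_output=True).returncode == 0
```

If `resolve` fails the problem is DNS, not the server; if it succeeds but `can_ping` fails, look at the network path or firewall next.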

Tuesday, April 27, 2010

Slow Internet Connection in India

Over the next few days, many Internet users in India will face slow speeds on their broadband connections. Many of you may see packet loss while accessing your websites, emails, or servers.

The disruption in the SEA-ME-WE 4 undersea submarine cable system, which links South East Asia and Europe, is likely to affect the high-speed Internet services in the country.

The submarine cable system suffered a fault near Italy and maintenance will be carried out for the next four days, which may cause some disruption in services.

The South East Asia–Middle East–Western Europe 4 (SEA-ME-WE 4) submarine communications cable system is run by a consortium of 16 companies (including Airtel and Tata Communications) and carries telecommunications traffic between South East Asia, the Middle East, and Western Europe, including India.

For more detail please check the links below:

http://www.pluggd.in/sea-me-we-4-cable-maintenance-hits-internet-connection-in-india-297/

http://economictimes.indiatimes.com/infotech/internet/Undersea-cable-system-repair-may-hit-Internet-service-in-India-/articleshow/5855874.cms

Be assured that there is no issue at our Datacenter or with the servers and the slow access is only because of the issue stated above.

Sunday, April 18, 2010

Plesk install failed on Redhat EL 5.5

The other day I was trying to install Plesk 9.5.1 on a RHEL 5.5 box, but it kept throwing an error I could not make sense of. The installation was failing, and I spent most of the day resolving it and getting it installed successfully.

The error I got was:

Determining the packages that need to be installed.
ERROR: Unable to install the "psa-9.5.1-rhel5.build95100414.15.i586" package.
Not all packages were installed.
Please, contact product technical support.

After a lot of digging, I finally found the error in the file /tmp/autoinstaller3.log. The installer was trying to install the BIND package, but another version of BIND, installed along with the OS, was already on the server. I removed that BIND package and tried installing again, and voila! The installation was successful, and I am happily configuring the rest of the server.

If anyone encounters a similar issue, I recommend checking the /tmp/autoinstaller3.log file; it will show you the exact cause of the failure.

For queries, you can email me at nitaishonline at gmail dot com.

Tuesday, April 14, 2009

When Web 2.0 fails

In this Web 2.0 world, mashups are red hot. Take the data from Craigslist, add it to Google Maps, and you have a visual representation of property listings within your target area. Web sites are rushing to publish their APIs so that their products are included in this latest Web 2.0 craze. But Billy Hoffman, security researcher with SPI Dynamics, warned at this year's Black Hat Briefings in Las Vegas that such convenience can invite trouble for both the user and the Web site. I want to call attention to the way Web sites themselves can (and should) do more to protect themselves against JavaScript exploits using AJAX bridges to loot their assets.

Bridging domains
AJAX is short for Asynchronous JavaScript and XML. In the old-school Internet, a synchronous world, an initial request is made to a Web application by a user through an Internet browser and that request is served and downloaded to the user. But should the user want to change the request or create another, the user would have to wait while the second and third requests are served and downloaded. In the asynchronous world of AJAX, a single request made by a user through a browser begins a dialogue with the Web application server by downloading and caching the user's anticipated next moves. With AJAX, an attacker can autonomously inject script into pages on a target site, re-inject the same host with multiple XSSs (cross-site scripts) or send multiple requests using complex HTTP methods. With AJAX, the attack landscape has increased, especially if the Web server doesn't filter input from users.

By design, AJAX is limited to contacting only one host server; AJAX bridges, acting as proxies, allow third-party domain sites to be used. Hoffman used the fictional example of Billy's Bookstore, a traditional brick-and-mortar bookstore whose online site uses Amazon.com's API to transparently provide its customers with an extended book search. From Amazon's perspective, Billy's Bookstore makes all of the requests, not the individual users. Indeed, under AJAX, it's impossible for a Web application to tell whether or not a user typed in a request; AJAX is capable of making autonomous requests all on its own. This could open Amazon (if it's not careful) to potential attacks from Billy's Bookstore customers.

When bridges fail
Hoffman, in his talk at Black Hat, called out several security flaws with AJAX bridges. AJAX bridges do not, for example, authenticate input. AJAX bridges do, however, rely on other components for security (not always secure), and under AJAX, it's impossible to repudiate (deny) that a specific malicious request was made. With AJAX, a criminal could invisibly exploit the security weakness in one company to attack the assets of another company by making complex requests, such as access to databases within the second company, that cannot easily be traced back to the first company.

Let's say a third-party site starts detecting malicious SQL injection activity, and its Web applications start seeing harmful JavaScript inserted into a SQL database request string. The third-party site could block the Billy's Bookstore ISP, but what if Billy's Bookstore generates a significant amount of traffic? The criminals in this example have succeeded in causing a denial-of-service attack on Billy's Bookstore customers by denying them access to the third-party site. Also, the third-party site would suffer a loss in traffic from having blocked Billy's Bookstore.

What's a company to do?
Hoffman offered the following advice: If a company is thinking of going AJAX, it should consider what is gained and whether it is necessary to adopt AJAX. If so, the company should then document all current user inputs and ensure there's input validation on each. Further, it should minimise the program logic exposed to the public and implement input validation on all function input, as well. Hoffman recommends following established Web standards rather than using creative hacks to accomplish what is desired. Shortcuts only open more avenues for attack.
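Input validation of the kind Hoffman recommends is usually done with an allowlist: accept only what the input legitimately needs, rather than trying to blocklist every dangerous character. A minimal sketch for the book-search example (the pattern and length limit are assumptions, not a complete defense; real applications should also use parameterized SQL queries):

```python
import re

# Allowlist: letters, digits, spaces, and a little punctuation a book
# title might contain -- nothing that can smuggle script or SQL syntax.
SEARCH_TERM = re.compile(r"^[A-Za-z0-9 ,.'\-]{1,100}$")

def validate_search_term(term):
    """Return True only when the whole input matches the allowlist."""
    return bool(SEARCH_TERM.match(term))
```

Applied on every public function input, as Hoffman suggests, this blocks both the XSS payloads and the SQL injection strings described above before they ever reach the bridge.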

With companies rushing to Web 2.0-enable their sites, some established businesses are needlessly compromising their security by depending upon the security of others. Companies should give careful consideration before opening their APIs and should not rush to allow any and every possible connection with their site. Convenience on the Internet most often compromises security. Just because AJAX is currently sexy doesn't mean it's necessarily a good idea.

Sunday, January 25, 2009

What is Web 2.0?

The term "Web 2.0" describes the changing trends in the use of World Wide Web technology and web design that aim to enhance creativity, communications, secure information sharing, collaboration and functionality of the web. Web 2.0 concepts have led to the development and evolution of web culture communities and hosted services, such as social-networking sites, video sharing sites, wikis, blogs, and folksonomies. The term first became notable after the O'Reilly Media Web 2.0 conference in 2004. Although the term suggests a new version of the World Wide Web, it does not refer to an update to any technical specifications, but rather to changes in the ways software developers and end-users utilize the Web. According to Tim O'Reilly:

"Web 2.0 is the business revolution in the computer industry caused by the move to the Internet as a platform, and an attempt to understand the rules for success on that new platform."

O'Reilly has said that the "2.0" refers to the historical context of web businesses "coming back" after the 2001 collapse of the dot-com bubble, in addition to the distinguishing characteristics of the projects that survived the bust or thrived thereafter.

Tim Berners-Lee, inventor of the World Wide Web, has questioned whether one can use the term in any meaningful way, since many of the technological components of Web 2.0 have existed since the early days of the Web.

Definition

Web 2.0 encapsulates the idea of the proliferation of interconnectivity and interactivity of web-delivered content. Tim O'Reilly regards Web 2.0 as the way that business embraces the strengths of the web and uses it as a platform. O'Reilly considers that Eric Schmidt's abridged slogan, don't fight the Internet, encompasses the essence of Web 2.0: building applications and services around the unique features of the Internet, as opposed to building applications and services that work against it (effectively "fighting the Internet").

In the opening talk of the first Web 2.0 conference, O'Reilly and John Battelle summarized what they saw as the themes of Web 2.0. They argued that the web had become a platform, with software above the level of a single device, leveraging the power of "The Long Tail," and with data as a driving force. According to O'Reilly and Battelle, an architecture of participation where users can contribute website content creates network effects. Web 2.0 technologies tend to foster innovation in the assembly of systems and sites composed by pulling together features from distributed, independent developers. (This could be seen as a kind of "open source" or possible "Agile" development process, consistent with an end to the traditional software adoption cycle, typified by the so-called "perpetual beta".)

Web 2.0 technology encourages lightweight business models enabled by syndication of content and of service and by ease of picking-up by early adopters.

O'Reilly provided examples of companies or products that embody these principles in his description of his four levels in the hierarchy of Web 2.0 sites:

* Level-3 applications, the most "Web 2.0"-oriented, exist only on the Internet, deriving their effectiveness from the inter-human connections and from the network effects that Web 2.0 makes possible, and growing in effectiveness in proportion as people make more use of them. O'Reilly gave eBay, Craigslist, Wikipedia, del.icio.us, Skype, dodgeball, and AdSense as examples.

* Level-2 applications can operate offline but gain advantages from going online. O'Reilly cited Flickr, which benefits from its shared photo-database and from its community-generated tag database.

* Level-1 applications operate offline but gain features online. O'Reilly pointed to Writely (now Google Docs & Spreadsheets) and iTunes (because of its music-store portion).

* Level-0 applications work as well offline as online. O'Reilly gave the examples of MapQuest, Yahoo! Local, and Google Maps (though mapping applications that use contributions from users to advantage, like Google Earth, could rank as level 2).

Non-web applications like email, instant-messaging clients, and the telephone fall outside the above hierarchy.

Web 2.0 websites allow users to do more than just retrieve information. They can build on the interactive facilities of "Web 1.0" to provide "network as platform" computing, allowing users to run software applications entirely through a browser. Users can own the data on a Web 2.0 site and exercise control over that data. These sites may have an "architecture of participation" that encourages users to add value to the application as they use it. This stands in contrast to older, traditional websites, the sort that limited visitors to passive viewing and whose content only the site's owner could modify. Web 2.0 sites often feature a rich, user-friendly interface based on Ajax, OpenLaszlo, Flex or similar rich media.

The concept of Web-as-participation-platform captures many of these characteristics. Bart Decrem, a founder and former CEO of Flock, calls Web 2.0 the "participatory Web" and regards the Web-as-information-source as Web 1.0.

Participation has its limits, however: because group members who do not contribute to the provision of goods cannot be excluded from sharing in them, rational members may prefer to withhold their effort and free-ride on the contributions of others. According to Best, the characteristics of Web 2.0 are: rich user experience, user participation, dynamic content, metadata, web standards and scalability. Further characteristics, such as openness, freedom and collective intelligence by way of user participation, can also be viewed as essential attributes of Web 2.0.

Technology overview

The sometimes complex and continually evolving technology infrastructure of Web 2.0 includes server-software, content-syndication, messaging-protocols, standards-oriented browsers with plugins and extensions, and various client-applications. The differing, yet complementary approaches of such elements provide Web 2.0 sites with information-storage, creation, and dissemination challenges and capabilities that go beyond what the public formerly expected in the environment of the so-called "Web 1.0".

Web 2.0 websites typically include some of the following features/techniques. Andrew McAfee used the acronym SLATES to refer to them:

1. "Search: the ease of finding information through keyword search, which makes the platform valuable.

2. Links: guides to important pieces of information. The best pages are the most frequently linked to.

3. Authoring: the ability to create and update content, shifting the web from the creation of a few to the constantly updated, interlinked work of many. In wikis, content is iterative in the sense that people undo and redo each other's work. In blogs, content is cumulative in that the posts and comments of individuals accumulate over time.

4. Tags: categorization of content by creating tags, simple one-word descriptions that facilitate searching and avoid rigid, pre-made categories.

5. Extensions: automation of some of the work and pattern matching by using algorithms, e.g. amazon.com recommendations.

6. Signals: the use of RSS (Really Simple Syndication) technology to notify users of changes to the content as they happen."
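The Tags and Search elements of SLATES can be sketched as a simple folksonomy index: users attach free-form, one-word tags to content, and keyword lookup works against those tags rather than a pre-made category tree. A toy illustration in Python (the items and tags are invented for the example):

```python
from collections import defaultdict

# A folksonomy: users attach free-form tags to items; there is no
# fixed taxonomy, so the category set grows out of user behavior.
tag_index = defaultdict(set)

def tag(item, *tags):
    """Record user-chosen tags for an item (SLATES 'Tags')."""
    for t in tags:
        tag_index[t.lower()].add(item)

def search(t):
    """Keyword lookup by tag (SLATES 'Search')."""
    return sorted(tag_index.get(t.lower(), set()))

# Two hypothetical users tagging two pieces of content.
tag("photo-42", "sunset", "beach")
tag("post-7", "beach", "travel")
```

Here `search("beach")` returns both items, even though no administrator ever created a "beach" category: the categorization emerges entirely from participation.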

Usage

Higher Education

Universities are using Web 2.0 to reach out to and engage Generation Y and other prospective students, according to recent reports. Examples include social networking websites such as YouTube, MySpace, Facebook, Youmeo, Twitter and Flickr; upgrading institutions' websites in Gen Y-friendly ways, such as stand-alone micro-websites with minimal navigation; student blogs that put current students in cyberspace; and virtual learning environments such as Moodle, which let prospective students log on and ask questions.

In addition to free social networking websites, schools have contracted with companies that provide many of the same services as MySpace and Facebook, but can integrate with their existing database. Companies such as Harris Connect, iModules and Publishing Concepts have developed alumni online community software packages that provide schools with a way to communicate to their alumni and allow alumni to communicate with each other in a safe, secure environment.

Government 2.0

Web 2.0 initiatives are being employed within the public sector, giving more currency to the term Government 2.0.

Public diplomacy

Web 2.0 initiatives have been employed in public diplomacy for the Israeli government. The country is believed to be the first to have its own official blog, MySpace page, YouTube channel, Facebook page and a political blog. The Israeli Ministry of Foreign Affairs started the country's video blog as well as its political blog. The Foreign Ministry also held a microblogging press conference via Twitter about its war with Hamas, with Consul David Saranga answering live questions from a worldwide public in common text-messaging abbreviations. The questions and answers were later posted on Israelpolitik.org, the country's official political blog.

Web-based applications and desktops

Ajax has prompted the development of websites that mimic desktop applications, such as word processors, spreadsheets, and slide-show presentations. WYSIWYG wiki sites replicate many features of PC authoring applications. Still other sites perform collaboration and project-management functions. In 2006 Google, Inc. acquired one of the best-known sites of this broad class, Writely.

Several browser-based "operating systems" have emerged, including EyeOS and YouOS. Despite the name, many of these services function less like a traditional operating system and more like an application platform. They mimic the user experience of desktop operating systems, offering features and applications similar to a PC environment, with the added ability to run within any modern browser.

Numerous web-based application services appeared during the dot-com bubble of 1997–2001 and then vanished, having failed to gain a critical mass of customers. In 2005, WebEx acquired one of the better-known of these, Intranets.com, for US$45 million.

Another example of a web-based service that did not survive the dot-com bust was Pets.com. Its business model was flawed in that the products it sold and delivered to customers' doorsteps had very thin margins and were expensive to ship.

Internet applications

Main article: Rich Internet application

XML and RSS

Advocates of "Web 2.0" may regard syndication of site content as a Web 2.0 feature, involving as it does standardized protocols which permit end-users to make use of a site's data in another context (such as another website, a browser plugin, or a separate desktop application). Protocols which permit syndication include RSS (Really Simple Syndication, also known as "web syndication"), RDF (as in RSS 1.1), and Atom, all of them XML-based formats. Observers have started to refer to these technologies as "web feeds" as the usability of Web 2.0 evolves and the more user-friendly Feeds icon supplants the RSS icon.
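The feeds described above are plain XML documents, which is what lets any client reuse a site's data in another context. As a sketch, here is a minimal, invented RSS 2.0 feed parsed with Python's standard library, the way a feed reader would list its items:

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 feed of the kind a Web 2.0 site syndicates.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def item_titles(feed_xml):
    """Return the titles of all items, as a feed reader would list them."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]
```

Because the format is standardized, `item_titles` works unchanged against any conforming RSS 2.0 feed, not just this one; that interchangeability is the point of syndication.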

Specialized protocols

Specialized protocols such as FOAF and XFN (both for social networking) extend the functionality of sites or permit end-users to interact without centralized websites.

Web APIs

Machine-based interaction, a common feature of Web 2.0 sites, uses two main approaches to Web APIs, which allow web-based access to data and functions: REST and SOAP.

1. REST (Representational State Transfer) Web APIs use HTTP alone to interact, with XML (eXtensible Markup Language) or JSON payloads;

2. SOAP involves POSTing more elaborate XML messages and requests to a server that may contain quite complex, but pre-defined, instructions for the server to follow.

Often servers use proprietary APIs, but standard APIs (for example, for posting to a blog or notifying a blog update) have also come into wide use. Most communications through APIs involve XML or JSON payloads.
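The contrast between the two approaches can be sketched without a live server: a REST call is an ordinary HTTP URL carrying a small XML or JSON payload, while a SOAP call POSTs the same request wrapped in a pre-defined XML envelope. A Python illustration (the endpoint, method name, and parameters are hypothetical, and the SOAP namespaces that real toolkits add are omitted for brevity):

```python
import json
import urllib.parse
import xml.etree.ElementTree as ET

def rest_request(base_url, resource, params):
    """REST style: the request is just a URL; the payload is plain JSON."""
    query = urllib.parse.urlencode(params)
    return base_url + "/" + resource + "?" + query

def soap_envelope(method, params):
    """SOAP style: the same call POSTed as a pre-defined XML envelope."""
    env = ET.Element("Envelope")          # real SOAP adds xmlns attributes
    body = ET.SubElement(env, "Body")
    call = ET.SubElement(body, method)
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

# The same hypothetical query expressed both ways.
url = rest_request("http://api.example.com", "posts",
                   {"author": "alice", "limit": 5})
payload = json.dumps({"author": "alice", "limit": 5})  # typical REST body
envelope = soap_envelope("GetPosts", {"author": "alice", "limit": 5})
```

The REST version leans entirely on what HTTP already provides; the SOAP version carries its own structure, which is why SOAP requests can encode quite complex, pre-defined instructions at the cost of heavier messages.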

See also Web Services Description Language (WSDL), the standard way of publishing a SOAP API, and the many published Web Service specifications.

Economics

The analysis of the economic implications of "Web 2.0" applications and loosely-associated technologies such as wikis, blogs, social-networking, open-source, open-content, file-sharing, peer-production, etc. has also gained scientific attention. This area of research investigates the implications Web 2.0 has for an economy and the principles underlying the economy of Web 2.0.

Cass Sunstein's book "Infotopia" discussed the Hayekian nature of collaborative production, characterized by decentralized decision-making, directed by (often non-monetary) prices rather than central planners in business or government.

Don Tapscott and Anthony D. Williams argue in their book Wikinomics: How Mass Collaboration Changes Everything (2006) that the economy of "the new web" depends on mass collaboration. Tapscott and Williams regard it as important for new media companies to find ways to profit with the help of Web 2.0. The prospective Internet-based economy that they term "Wikinomics" would depend on the principles of openness, peering, sharing, and acting globally. They identify seven Web 2.0 business models (peer pioneers, ideagoras, prosumers, new Alexandrians, platforms for participation, global plant floor, wiki workplace).

Organizations could make use of these principles and models in order to prosper with the help of Web 2.0-like applications: "Companies can design and assemble products with their customers, and in some cases customers can do the majority of the value creation." "In each instance the traditionally passive buyers of editorial and advertising take active, participatory roles in value creation." Tapscott and Williams suggest business strategies such as "models where masses of consumers, employees, suppliers, business partners, and even competitors cocreate value in the absence of direct managerial control". They see the outcome as an economic democracy.

Some other views in the scientific debate agree with Tapscott and Williams that value creation increasingly depends on harnessing open source/content, networking, sharing, and peering, but disagree that this will result in an economic democracy. Instead, they predict a subtle form and deepening of exploitation, in which Internet-based global outsourcing reduces labor costs by transferring jobs from workers in wealthy nations to workers in poor nations. In this view, the economic implications of a new web might include, on the one hand, the emergence of new business models based on global outsourcing, while on the other hand non-commercial online platforms could undermine profit-making and anticipate a co-operative economy. For example, Tiziana Terranova speaks of "free labor" (performed without payment) in the case where prosumers produce surplus value in the circulation sphere of the cultural industries.

Some examples of Web 2.0 business models that attempt to generate revenues in online shopping and online marketplaces are referred to as social commerce and social shopping. Social commerce involves user-generated marketplaces where individuals can set up online shops and link their shops in a networked marketplace, drawing on concepts of electronic commerce and social networking. Social shopping involves customers interacting with each other while shopping, typically online, and often in a social network environment. Academic research on the economic value implications of social commerce and having sellers in online marketplaces link to each others' shops has been conducted by researchers in the business school at Columbia University.

Criticism

The argument exists that "Web 2.0" does not represent a new version of the World Wide Web at all, but merely continues to use so-called "Web 1.0" technologies and concepts. Techniques such as AJAX do not replace underlying protocols like HTTP, but add an additional layer of abstraction on top of them. Many of the ideas of Web 2.0 had already been featured in implementations on networked systems well before the term "Web 2.0" emerged. Amazon.com, for instance, has allowed users to write reviews and consumer guides since its launch in 1995, in a form of self-publishing. Amazon also opened its API to outside developers in 2002. Previous developments also came from research in computer-supported collaborative learning and computer-supported cooperative work and from established products like Lotus Notes and Lotus Domino.

In a podcast interview Tim Berners-Lee described the term "Web 2.0" as a "piece of jargon." "Nobody really knows what it means," he said, and went on to say that "if Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along."

Other criticism has included the term "a second bubble" (referring to the Dot-com bubble of circa 1995–2001), suggesting that too many Web 2.0 companies attempt to develop the same product with a lack of business models. The Economist has written of "Bubble 2.0." Venture capitalist Josh Kopelman noted that Web 2.0 had excited only 530,651 people (the number of subscribers at that time to TechCrunch, a Weblog covering Web 2.0 matters), too few users to make them an economically viable target for consumer applications. Although Bruce Sterling reports he's a fan of Web 2.0, he thinks it is now dead as a rallying concept.

Critics have cited the language used to describe the hype cycle of Web 2.0 as an example of techno-utopianist rhetoric. Web 2.0 is not the first example of communication creating a false, hyper-inflated sense of the value of technology and its impact on culture. The dot-com boom and subsequent bust in 2000 was a culmination of rhetoric of the technological sublime in terms that would later make their way into Web 2.0 jargon. Indeed, several years before the dot-com stock market crash, then-Federal Reserve chairman Alan Greenspan described the run-up of stock values as "irrational exuberance." Shortly before the crash of 2000, Robert J. Shiller's book Irrational Exuberance (Princeton, NJ: Princeton University Press, 2000) was released, detailing the overly optimistic euphoria of the dot-com industry. The book Wikinomics: How Mass Collaboration Changes Everything (2006) even goes as far as to quote critics of the value of Web 2.0 in an attempt to acknowledge that hyper-inflated expectations exist but that Web 2.0 is really different.

Trademark

In November 2004, CMP Media applied to the USPTO for a service mark on the use of the term "WEB 2.0" for live events. On the basis of this application, CMP Media sent a cease-and-desist demand to the Irish non-profit organization IT@Cork on May 24, 2006, but retracted it two days later. The "WEB 2.0" service mark registration passed final PTO Examining Attorney review on May 10, 2006, and was registered on June 27, 2006. The European Union application (number 004972212, which would confer unambiguous status in Ireland), filed on March 23, 2006, remains pending.